The following table describes the status, the proposer and other metadata of this motion.
| Consultation: | Winter General Meeting 2026 |
|---|---|
| Agenda item: | 3. Motions of Policy and Organisation |
| Proposer: | Isaac Short (Durham Green Party) |
| Status: | Published |
| Submitted: | 15/01/2026, 09:34 |
Comments
Samuel Hall:
I strongly resonate with the concerns and values expressed here, particularly around ethics, environmental impact, and social responsibility. However, in its current form, I am concerned that this motion will be very difficult to translate into clear, actionable policy.
Firstly, the definition of “AI” used is extremely broad and vague. The definition — “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” — could reasonably be interpreted to include technologies as simple as calculators. This makes meaningful regulation challenging.
A more useful definition would focus on the specific technologies that have driven modern AI adoption, particularly machine learning and neural networks. For example: software that produces outputs using stochastic, data-driven prediction methods, including technologies such as neural networks, large language models, and diffusion models. This would clearly distinguish modern AI systems from conventional deterministic software.
On environmental sustainability, I strongly agree with the emphasis on renewable energy and responsible cooling. These are among the strongest and most concrete elements of the motion, and they represent areas where meaningful regulation is genuinely achievable.
The sections on education and AI literacy are well-intentioned, but I believe they underestimate the scale of the challenge. Teaching people "how to spot AI-generated content" is effectively an ongoing arms race against major technology companies. Similar challenges are currently addressed by specialist organisations such as GCHQ and related intelligence agencies. Expecting this to be comprehensively taught at KS2 level places a very significant burden on an already overstretched education system.
I also agree that AI-generated disinformation poses serious risks to democracy and personal lives. However, it is unclear why this concern is limited only to AI-generated disinformation. All disinformation is harmful, regardless of how it is produced, and policy should reflect this broader reality.
Overall, I feel this motion attempts to address too many complex issues at once under a loosely defined banner of “ethical AI.” Several points overlap or repeat, and this weakens its practical impact.
I would encourage a more focused approach, prioritising a smaller number of clearly defined, enforceable goals. For example:
- Ensuring AI infrastructure is powered and cooled using renewable resources
- Establishing clear accountability standards for developers and operators
- Supporting transparent and responsible deployment in public services
By narrowing the scope and improving technical clarity, this motion could become far more effective in shaping meaningful and realistic policy.
Isaac Short:
I get your point about the definition used, but it is extremely difficult to find a definition that is both precise and widely accepted, which is why I used this one. I agree it has the potential to be misconstrued, but I felt it was better than the alternatives. I don't believe that targeting only LLMs and neural networks would be effective, as there are other technologies that people often class as "AI" that are harmful. This motion also only targets technologies that are harmful, so something like a calculator would, I believe, fall outside its scope.
On the education point, I am also aware that it is increasingly difficult to distinguish AI-generated media from human creation. However, I believe it is sensible to call for children in secondary school to be given every possible tool to do so. This is not calling for every child to become an expert, which would obviously be ridiculous, but instead points to resources such as Google's AI checker.
On disinformation: obviously all disinformation is harmful, but this motion is focused on AI, so only AI-generated disinformation is targeted. If there is to be a broader focus on disinformation, that should come in the form of a separate motion dedicated to it.
Thanks for the feedback on my motion. I am aware it may not be perfect, but I believe there is no such thing as a perfect motion. I believe this one steers the Young Greens in the right direction on a critical topic. I hope this has addressed some of your concerns.