| Consultation: | Winter General Meeting 2026 |
|---|---|
| Agenda item: | 3. Motions of Policy and Organisation |
| Proposer: | Oliver Ashton (Green Party) |
| Status: | Published |
| Submitted: | 23 January 2026, 23:59 |
B10: Use of AI in Elections
Motion text
1. Candidates are discouraged from using generative artificial intelligence
(AI) within their campaigns. This includes campaigning material, social
media posts, speeches, official communications and their candidate
statements.
2. All material, media and communications should be made by candidates or
their supporting campaigners. Candidates who are believed to have used
generative AI in their campaigns will be asked by the ERO to remove and/or
change the material, media or communications. Any member may raise
concerns over a candidate's use of AI in their campaign, and it is for the
ERO to investigate the claims.
3. The ERO may identify possible AI usage through irregular wording and
anomalous language, images with known traits of generative art, or other
means the ERO deems fit. The ERO may request proof that the media was
created by the candidate.
4. A record of rulings made by the ERO should be kept by the DAC, with the
candidate concerned named in the record and the whistleblower kept
anonymous. These records shall be made publicly available to all members.
Reason
To discourage the use of AI in internal elections. The membership deserves to know that the campaign run by a candidate is genuine and that a candidate's policies are their own.
Supporters
- Jemima Gayfer-Thoms
- Alfie Neumann
Comments
Darius Seago:
I completely agree with us being against using AI to write an entire statement or campaign message.
Manu Teague-Sharpe:
All that said, I do support this motion, and perhaps we can change aspects of it in the future.
Samuel Hall:
I strongly support the aim of fairness and authenticity in internal elections. However, I have serious concerns about how this motion could be enforced in practice.
The boundary between “AI-generated” and “human-created” content is becoming increasingly blurred. Almost all modern software now incorporates some form of AI, including widely used tools such as Google’s spellcheck, grammar correction, and accessibility features. This makes it extremely difficult to define what should and should not be considered “AI use” in a meaningful way.
Related to this, “spotting AI-generated content” is becoming increasingly difficult, and there is currently no reliable technical method for doing so. In practice, most detection relies on informal “rules of thumb” and personal judgement. These are highly subjective and vulnerable to bias.
Allowing candidates to be investigated, sanctioned, or potentially disqualified based on opinions such as “this looks like it might have been made using AI” sets a very dangerous precedent. It risks unfair accusations, reputational harm, and inconsistent enforcement.
Even experts in this field cannot reliably determine the origin of most digital content. Building a regulatory system around such uncertain evidence is therefore both impractical and unjust.
Rather than relying on unreliable detection methods, I would encourage a focus on transparency, honesty, and equal access to campaign resources, regardless of whether AI tools are used.