C is for 'Consent and Control: Facial Recognition, Deepfakes, and School Life in the UK.'
- hello25051
- Oct 14
- 6 min read

Across schools, a new concern is emerging around AI 'enhancements' and digital retouching of school photos. Parents have begun to question whether editing a child’s image - smoothing skin, whitening teeth, or subtly altering facial features - crosses the line between presentation and distortion. In one widely discussed case reported by the New York Post (2021)*1, families expressed outrage after discovering that professional photographers had digitally altered pupils' faces without parental consent. In the UK, similar discussions are growing: parents and educators are debating how far technology should go in reshaping children’s appearances, and whether such edits - especially when shared online - risk harming body image, authenticity, and a child’s right to consent. Most of these edited versions exist as copies shared online, rather than changes to the original photos stored safely on devices such as iPhones, but they still raise vital questions about ethics, privacy, and oversight in school photography.
Recent online experiences - such as 'cross-eyed' photo filters, AI-driven school photo enhancements, and deepfake bullying - show how quickly artificial intelligence and facial editing tools have entered everyday life. In the UK, these reported incidents typically involve edited versions shared online rather than direct alterations to original images on camera rolls or personal devices such as iPhones.
This article explores how these technologies are affecting young people, outlines current uses of facial recognition in the UK across policing, government, and the cultural and creative sectors, and highlights official online resources for reference and safety.

Trending Online Experiences
Face-morph filters and appearance-based bullying
Social media platforms are flooded with filters that distort faces - widening eyes, crossing them, or changing mouth shapes.*5 These effects, first seen as 'fun', have been widely criticized across the UK.
The 2025 'chubby filter' sparked backlash and was removed after complaints that it promoted fat-shaming.*2
Parents and researchers note a growing link between these filters and bullying in schools.*14

School photo 'enhancements'
Across UK schools, parents have noted automatic retouching of school photographs without consent. While marketed as 'enhancement,' these edits have raised questions about body image, consent, and authenticity.
Deepfake bullying in schools
Deepfake bullying - using AI to fabricate nude or humiliating images of classmates - has become a serious issue. UK and global reports *3 (Los Angeles Times) confirm real-world incidents where students have been targeted and extorted. It’s fast, viral, and devastating - forcing schools to strengthen online safety responses and police to treat it as a criminal matter.
Facial Recognition in the UK
Facial recognition technology (FRT) is used by UK and US police forces, retailers, and creative industry projects.*13

Law enforcement and corporate use
Metropolitan Police and South Wales Police use Live Facial Recognition (LFR) to identify wanted suspects; the Home Office maintains transparency pages with deployment records and results.*4
Southern Co-op supermarkets have trialed FRT for store security, facing a legal challenge from Big Brother Watch over privacy concerns.
In May 2022, the UK Information Commissioner’s Office (ICO) fined Clearview AI £7.5 million for scraping images of UK residents from the web without consent. The Upper Tribunal upheld this enforcement in October 2025.*6
Cultural and creative sector use
Within UK government-backed CreaTech programmes *7 - a multi-billion-pound investment generating novel products, services and experiences - facial recognition is used in projects such as:
StoryFutures (Royal Holloway, Egham, Surrey) - a £6.4m project (2018 to 2023) under the £56m Creative Industries Clusters Programme that maps facial expressions to avatars and performances.*8
Creative Informatics (Edinburgh) - explores “emotional AI” in arts and public engagement, including prototype work.
*7 - above data courtesy of https://royalanniversarytrust.org.uk/wp-content/uploads/2025/02/CreaTech-Report.pdf
pg 48 (top left) - Fundraising by UK-based CreaTech companies reached £5 billion in 2021
pg 44 (top right) - Creative Industries and CreaTech companies in the UK
pg 27 (bottom left) - Ethnicity of people working in CreaTech: 90.41% White
pg 47 (bottom right) - 576 (4.2%) of UK CreaTech companies are EdTech companies
Cultural implications
Facial recognition raises deep ethical and cultural questions. In a dystopian sense, such systems could identify doppelgängers (lookalikes), raising concerns about potential intrusions into personal identity and privacy.*13 Because the technology is relatively inexpensive to build, develop, and deploy across multiple sectors, cities like San Francisco have banned its police use, comparing it to “Big Brother” surveillance.*10 In the UK, Sir Peter Bazalgette, who brought the Big Brother TV show to Britain, co-chairs the Creative Industries Sector Taskforce and helps shape the same creative-technology sector developing novel products, services and experiences. He has also been a member of the UK Education Committee since May 2022.*11
Global Context
Denmark is banning mobile phones in schools and after-school clubs from 2025, citing child wellbeing, focus, and mental health.*17
In the US, Oregon and other states have tested Amazon’s face recognition system Rekognition,*15 while San Francisco and other cities prohibit law-enforcement use of FRT altogether.*13
The UK ICO highlights a case in which Head Teacher Tom Sparks of Chelmer Valley High School trialed facial recognition, and emphasizes the importance of legal, ethical and oversight controls when implementing such systems in educational settings.*18
Why Guardrails Matter
The UK Parliament’s Joint Committee on Human Rights (2019) *16 declared that “the consent model is broken” for online privacy. This means ticking a box isn’t enough to protect families against complex technologies.
From classroom photo filters to live facial recognition, rapidly unleashed technology is reshaping childhood, privacy, and education.
We understand that consent alone is not enough - especially for children - and are actively looking for an ongoing framework to safeguard them online. This has included adjusting privacy-by-default settings on phones and tablets, discussing strong lawful bases for photo sharing, and encouraging the children to talk about photo sharing themselves. We also research DPIAs: parents can check how schools or apps protect children’s data by asking whether they have completed a Data Protection Impact Assessment (DPIA), and can note that pupils may refuse school biometrics and that parental consent is needed before they are used. We have started to collate and research guardrails against surveillance and synthetic images in gaming communities, and to list fast, well-known reporting routes (CEOP, image abuse to the Revenge Porn Helpline, or serious threats directly to 999 or 101).
After reading reports such as the New York Post (2021)*1, parents, schools, and official photographers must work together to protect children’s images. We are continuously researching stronger safeguarding practices and encourage all schools to involve their Data Protection Officer (DPO) in reviewing contracts with school photographers. Even officially appointed photographers must meet the highest data protection and safeguarding standards.
As parents, we have the right to:
Know who holds our child’s images
Set clear limits on consent
Expect privacy, security, and transparency at every step of the process.
Guardrails must come first - before the next generation grows up thinking surveillance and synthetic images are a normal part of daily life.
For further information:
ICO – ico.org.uk: facial recognition, biometrics & privacy guidance.
Ofcom – ofcom.org.uk: Online Safety Act updates & investigations.
UK Safer Internet Centre – saferinternet.org.uk: parent and school guides.
Met Police LFR page – met.police.uk/facial-recognition for current deployments.
The Protection of Freedoms Act 2012 requires parental consent and a pupil veto before any biometric data (like fingerprints or facial recognition) is used.
Schools must complete a Data Protection Impact Assessment (DPIA) and follow ICO biometric guidance.
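The legal rule above can be sketched as a simple check: processing a pupil's biometric data is permitted only when parental consent has been given, the pupil has not objected, and a DPIA is on file. This is an illustrative sketch of the logic only, not legal advice; the function and field names are our own.

```python
from dataclasses import dataclass

@dataclass
class BiometricConsent:
    """Illustrative record of the consent state for one pupil (names are our own)."""
    parental_consent: bool  # written consent from at least one parent
    pupil_objects: bool     # the pupil can refuse, and refusal overrides consent
    dpia_completed: bool    # school has completed a Data Protection Impact Assessment

def may_process_biometrics(c: BiometricConsent) -> bool:
    """Allow biometric processing only when every guardrail holds:
    parental consent given, no pupil objection, and a DPIA on file."""
    return c.parental_consent and not c.pupil_objects and c.dpia_completed

# A pupil's objection overrides parental consent:
print(may_process_biometrics(BiometricConsent(True, True, True)))   # False
print(may_process_biometrics(BiometricConsent(True, False, True)))  # True
```

The point of the sketch is that consent is necessary but not sufficient: any single missing guardrail blocks processing.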
For designers and tech projects:
Build with data protection by design and default, not just user consent.
Use a lawful basis appropriate to risk — biometrics require explicit justification and safeguards.
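As a minimal sketch of 'data protection by design and default' (all field names here are hypothetical), a system can whitelist only the data a recipient actually needs - for example, a school photographer - rather than passing the full pupil record and relying on consent alone:

```python
# Illustrative data-minimisation helper: share only whitelisted fields.
ALLOWED_FOR_PHOTOGRAPHER = {"pupil_id", "class_name"}  # hypothetical field whitelist

def minimise(record: dict, allowed: set) -> dict:
    """Return a copy of the record containing only permitted fields,
    so extra personal data is never shared by default."""
    return {k: v for k, v in record.items() if k in allowed}

full_record = {
    "pupil_id": "P123",
    "class_name": "5B",
    "home_address": "…",   # must not leave the school's systems
    "date_of_birth": "…",
}
print(minimise(full_record, ALLOWED_FOR_PHOTOGRAPHER))
# only pupil_id and class_name survive
```

The design choice is that sharing is opt-in per field: anything not explicitly allowed is withheld, which is the 'default' half of data protection by design and default.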
Resource / Game | Focus Area | Age Group
--- | --- | ---
BBC Own It – “Real or Fake?” | Teaches how to recognise edited or misleading content; online safety tips for children. | 7–13
UK Safer Internet Centre – Fake or Real Activities | Lesson plans and interactive exercises on spotting fake images and misinformation. | 8–16
Common Sense Media – Digital Citizenship Games | Mini-games and videos to teach critical thinking about fake media and online behaviour. | 8–16
Google Interland – Be Internet Awesome | Interactive world teaching children to identify scams, fake profiles, and AI trickery. | 7–13
Thinkuknow by CEOP | Guides and activities for recognising online grooming, fake accounts, and digital risks. | 5–17
Internet Matters – Online Fake News Hub | Tools and guides for parents to help children spot manipulated images and videos. | Parents & Teens
Google Fact Check Explorer | Shows how journalists verify photos and videos – good for older teens. | 13+
Appendix:
*7 CreaTech - graphs illustrated courtesy of https://royalanniversarytrust.org.uk/wp-content/uploads/2025/02/CreaTech-Report.pdf
*16 UK Parliament’s Joint Committee on Human Rights (2019)
*18 https://ico.org.uk/for-the-public/ico-40/facial-recognition-technology-in-schools - facial recognition as used by Head Teacher Tom Sparks at Chelmer Valley High School














