Oxford University Press has urged schools to strengthen pupils’ AI literacy after a new survey found that fewer than half of students feel confident they can spot reliable AI-generated information. The study, which polled 2,000 pupils across the UK, reports that only 47 per cent believe they can identify trustworthy content produced by artificial intelligence. The finding highlights a growing risk as generative AI becomes a routine part of homework, revision, and online research. OUP says schools need to equip students with the skills to judge accuracy, spot bias, and verify sources in an information landscape that now includes machine-written text, AI-made images, and synthetic audio. The publisher frames the result as a call to action for education leaders, teachers, and families who want technology to support learning without causing confusion or spreading misinformation.
Context and timing
OUP published the survey findings in the UK on 15 October 2025. The research focuses on how pupils engage with AI tools and how confident they feel about judging the reliability of AI-generated material.
Survey highlights a confidence gap in AI literacy
The headline figure exposes a confidence gap at the heart of the AI debate in schools. With only 47 per cent of pupils saying they feel confident in identifying trustworthy AI content, more than half doubt their ability to check claims made by AI tools. Confidence does not equal accuracy, but the result signals that many pupils want, or need, more structured guidance to judge information that looks credible at first glance.
The survey’s size, at 2,000 pupils, gives the finding weight across a range of school types and regions. It also points to a core challenge for teachers and parents: AI systems can sound fluent and authoritative even when they present errors or outdated facts. Pupils may struggle to recognise the difference, especially when tools offer instant answers without clear citations or context. OUP’s warning reflects that tension and places AI literacy alongside reading, writing, and numeracy as a core skill.
Classrooms face rapid growth in AI use
Teachers report a sharp rise in student use of generative AI for drafts, summaries, translations, and practice questions. Pupils turn to chat-based tools for quick explanations and to plan essays or solve maths problems. The speed and fluency of these systems make them attractive, yet they also raise new questions about accuracy, bias, and academic honesty. Schools must help students harness useful features while avoiding shortcuts that erode learning.
The wider online environment compounds the challenge. AI-made images and audio clips can spread quickly on social media, blurring the line between fact and fiction. Students who already face information overload now confront content that appears authentic even when it is synthetic. The OUP survey suggests many pupils recognise the risk but still lack the toolkit to evaluate and verify AI outputs with confidence.
Publishers and schools consider the next steps
OUP’s message underscores the need for practical support in classrooms. The publisher calls on schools to give pupils the tools to judge AI content, not to block it outright. That stance echoes a growing view in education: students should learn how AI works, where it fails, and how to test claims against trusted sources. Teachers can lead that shift by modelling good practice and by guiding pupils through structured checks when they use AI in research or revision.
Schools and publishers see a shared role in this work. Educators shape classroom routines and assessment design, while publishers produce resources that explain AI concepts in clear language. Together, they can help pupils ask sharper questions of information: Who created this? Which sources support it? What evidence challenges it? By building these habits early, schools can turn AI from a confusion risk into a learning aid.
Building critical thinking into everyday lessons
Strong AI literacy rests on critical thinking. Pupils need to trace claims back to original sources, compare answers across different outlets, and look for missing context. Teachers can weave these habits into daily lessons. For example, pupils can analyse two AI responses to the same prompt and evaluate which one uses better evidence and clearer reasoning. Such exercises train students to probe, not just accept, fluent text.
Clear explanations of AI’s limits also matter. Generative systems draw on patterns in data; they do not understand truth or intent. They can produce confident errors, repeat bias from training material, or fabricate citations. Students who grasp these limits will treat outputs as drafts to refine, not as final answers. That mindset helps pupils slow down, check facts, and pair AI with subject knowledge and human judgement.
Policies that protect learning and assessment integrity
Schools benefit from simple, consistent rules for AI use. Clear classroom policies can require pupils to note when and how they use AI, especially in homework and coursework. Declarations encourage transparency and give teachers a starting point for feedback. Staff training can support these policies so teachers can explain acceptable uses, spot red flags, and guide students to credible sources and verification tools.
Assessment design also plays a role. Teachers can set tasks that reward reasoning steps, personal reflection, and source analysis, elements that generic AI output rarely replicates convincingly. Many exam boards now address AI in their malpractice guidance, and schools continue to adapt their approach to ensure fair assessment. These measures aim to preserve academic integrity while recognising that AI will remain part of pupils’ study routines.
Parents and carers as partners in AI literacy
Families influence how pupils use technology at home. Simple steps, such as asking children to show their sources, explain their reasoning, or compare answers from two tools, can reinforce classroom habits. Parents do not need expert knowledge to help; they can focus on curiosity and verification. When families and schools align expectations, pupils receive consistent messages about responsible AI use.
Community discussions can also reduce anxiety. Many adults share pupils’ questions about how AI works and when to trust it. School-led workshops or guides can demystify key concepts, explain common pitfalls, and point to reputable resources. Open conversations help pupils feel supported as they learn to navigate information that now includes machine-written and human-written content side by side.
Wrap-up
OUP’s survey of 2,000 UK pupils offers a clear signal: with only 47 per cent of students confident they can judge whether AI-generated information is trustworthy, schools and families must step up AI literacy. The finding does not call for bans; it calls for guidance. Pupils need practical skills to evaluate claims, check sources, and understand how AI tools produce text and images. Teachers can embed these habits into lessons, while schools can write simple rules that support transparency and fair assessment. Publishers and education leaders can provide resources that explain AI in plain language. As AI becomes part of everyday study, the system must help pupils turn fluent machine output into learning they can trust. That shift will shape how the next generation reads, writes, and thinks in a digital age.