Study: AI basically makes kids dumber

Artificial intelligence seems to be everywhere these days, from weird memes on Facebook to new email features. It’s also being used quite a bit by students and teachers, and new research from the Brookings Institution revealed some risks associated with that use.

“AI’s ease of use and its reinforcing outcomes (improved grades with little effort), combined with human tendencies toward shortcuts and the transactional nature of schooling (completing assignments for grades) drive cognitive offloading and dependency, atrophying students’ learning – particularly their mastery of foundational knowledge and critical thinking,” said Brookings.

To put it simply, using AI has the potential to make us, well… dumber.

Audacy has reported before on how excessive use of AI can contribute to “brain rot” in adults. Brookings’ Center for Universal Education aimed to create a “global snapshot” of AI use by students with data from 50 countries. Some goals of the snapshot were to see whether we are on the right track with AI use and to identify the risks and benefits of the technology.

In the U.S. specifically, research published last October by the College Board found that AI use was more prevalent than ever in U.S. high schools, with use of generative AI tools for school work increasing from 79% to 84% between January and May of last year alone. Last June, Gallup polling also found that three in 10 teachers used AI weekly.

Going into 2026, Brookings’ research indicated that the risks associated with use of AI in education still outweighed the benefits. These risks “are primarily cognitive, emotional, and social,” and they “are qualitatively different from challenges posed by previous educational technologies,” Brookings said.

It noted that risks come not only from using AI in the classroom but from unsupervised out-of-school use of applications such as AI companion software. Audacy recently reported on a tragic example of those risks – Google entering into a settlement over a lawsuit that claimed an AI chat program contributed to a young teen’s suicide.

“AI tools prioritize speed and engagement over learning and well-being,” said Brookings. “AI generates hallucinations – confidently presented misinformation – and performs inconsistently across tasks, what researchers describe as ‘a jagged and unpredictable frontier’ of capabilities. This unreliability makes verification both necessary and extraordinarily difficult.”

As they navigate this “jagged frontier,” young learners also often lack the foundational knowledge that could help them fact-check AI, and are therefore more vulnerable to accepting AI-generated misinformation. They can also have a harder time differentiating AI’s conversational tone and emulated empathy from genuine human interaction.

According to Brookings, “these patterns weaken learning mindsets as students develop unrealistic expectations about learning ease, lack opportunities to develop resilience and grit, and become less willing to engage in the productive struggles that lead to authentic learning.” It went on to say that the conflation of AI with humans “directly short-circuits children’s developing capacity to navigate authentic social relationships and assess trustworthiness – foundational competencies for both learning and development.”

There’s also a socioeconomic element to the AI risks, Brookings said. Since students from wealthier households often have more access to technology and fewer barriers to paid tiers of AI tools, they could have an advantage over peers from lower-income backgrounds.

“Long-established patterns of technology use suggest that privileged students may be more likely to employ AI productively to enhance their capabilities, while disadvantaged students risk using it substantively in ways that replace rather than augment their thinking,” Brookings explained.

Finally, Brookings warned about AI degrading our trust in each other.

“Such erosion of trust can provoke cynicism and nihilism in students, undermining the relationships upon which meaningful education depends,” it said. “Many teachers distrust the authenticity of student work, while students increasingly question whether their teachers’ materials and feedback are genuinely their own.”

Things aren’t all bad, though. Beyond the AI slop and concerning risks, there are some positives related to AI that Brookings noted. When “integrated with pedagogically sound approaches,” AI can enrich student learning, free up teachers’ time for more direct instruction with students, and help during teacher shortages, the report found.

“It helps teachers create more objective and targeted types of assessments that reduce bias while more accurately measuring students’ knowledge, skills, and aptitudes,” Brookings said. “AI can empower student learning by providing access to otherwise unavailable learning opportunities and presenting content in ways that are more engaging and accessible, particularly for students with disabilities, neurodivergent learners, and multilingual learners.”

Ultimately, the report concluded that we can still “bend the arc of AI implementation toward supporting student learning and development.”

Featured Image Photo Credit: Getty Images