To conclude our series on the impact of algorithms on our children’s lives, let’s talk about the newest kind of program shaping everyone’s day-to-day: excessive reliance on AI and generative models. As Artificial Intelligence becomes increasingly woven into daily life, young people are growing up with unprecedented access to tools that can generate ideas, solve problems, and even produce entire assignments in seconds. While these technologies offer remarkable possibilities, they also present new risks, especially when used as substitutes for genuine thinking, creativity, and learning.
A 2025 study by the MIT Media Lab raised concerns about the use of generative Artificial Intelligence and Large Language Models (LLMs) and their impact on our ability to think critically and originally, and to learn deeply. Over several months, researchers recorded EEGs (electroencephalograms) of participants aged 18 to 39 as they wrote SAT-level essays using ChatGPT, Google’s search engine, or no tools at all. Analysing activity across 32 brain regions, they consistently found that the ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioural levels.” They also found that, over time, the AI users became lazier, often resorting to copying and pasting by the end of the study. Other MIT studies from earlier in the year found that reliance on ChatGPT for communication and entertainment was linked to “heightened loneliness and reduced socialisation.”
Knowing that, much like other digital algorithms, LLMs learn and adapt to their users’ expectations and preferences, it is not unreasonable to infer that the adverse effects of prolonged reliance on these programs worsen over time. Children and teenagers therefore need to be taught to engage with AI in ways that do not undermine their own critical thinking and judgment, and to limit its use to specific cases where AI might reasonably be expected to outperform a human. This does not mean making these machines off limits; rather, it means explaining to young people the reasoning behind the restrictions and rewarding efforts to think and learn for themselves instead of delegating those tasks to an algorithm.
At ABI School, we believe that AI should be a tool for empowerment, not a shortcut that weakens curiosity or critical thinking. Helping students understand when and how to use these systems responsibly allows them to benefit from innovation without sacrificing their own growth. By fostering habits of reflection, originality, and independent problem-solving, we ensure that young people remain in control of the technology they use — not the other way around. With thoughtful guidance, students can learn to collaborate with AI while still developing strong, confident, human judgment.
🔗 Explore more on our website: abischool.fr
🔗 Follow ABI School on LinkedIn, Facebook, and Instagram to read the full series on child digital safety and wellbeing.