The Progressive Post
From attention to attachment

Artificial intelligence is entering children’s lives faster than we can understand its consequences. In just a few years, it has moved from novelty to infrastructure. It comes with immense risks, undermining not only learning and critical thinking, but the very core of being human: the ability to connect with others.
AI systems are rapidly becoming embedded in everyday life: writing emails, generating code, assisting with homework and personalising information. Advocates highlight enormous potential: cheaper education, individualised learning and increased productivity.
But when it comes to children and adolescents, the picture is far more complex. We are facing a familiar dilemma. On one side is the pressure to adopt early and ensure that children are ‘AI-literate’ and not left behind. On the other is a fundamental lack of knowledge about how these technologies affect developing minds.
We have seen this dynamic before. During the rise of the attention economy, digital technologies were rapidly introduced into children’s lives, justified by similar arguments about necessity and competitiveness. Countries such as Denmark and Sweden integrated digital tools into education systems without robust evidence of their benefits. Fifteen years later, research suggests that this transformation was not without costs. Increased screen exposure has been associated with declines in literacy and academic results, as well as changes in attention, emotional regulation and social functioning.
Now, with AI, we are about to repeat the same pattern, and the effects can be even more detrimental. AI systems are interactive, adaptive and increasingly designed to simulate human relationships. This marks a shift from an attention economy, where platforms compete for time, to an attachment economy, where technologies compete for emotional connection.
When thinking is outsourced
Industry-funded studies often portray AI as a powerful educational tool that can save teachers time and accelerate learning. AI can tailor explanations to a child’s interests, for example, explaining physics through basketball analogies. In low-resource contexts, early studies suggest that AI-assisted tutoring can accelerate foreign language acquisition. These findings are promising, but they must be interpreted cautiously, as many of these studies are funded by the very companies that sell the tools.
Independent research paints a more nuanced picture. AI can enhance performance, but it may not enhance learning. One study examined how students perform when writing essays under different conditions: using AI, using search engines, or writing by themselves. When students used AI after first engaging deeply with the material, outcomes improved. However, when AI replaced the initial cognitive effort, learning deteriorated. Participants who relied heavily on AI struggled to recall or reproduce their own work when the tool was removed.
This reflects a well-established principle in cognitive science: learning depends on mental effort. It is through actively thinking, struggling and working with information that we store it in memory and make sense of it. When that effort is bypassed, both memory and understanding weaken.
AI systems facilitate what is known as ‘cognitive offloading’: the delegation of thinking processes to external tools. While this can improve short-term efficiency, it reduces the mental effort required for deep learning. Over time, this can weaken memory, critical thinking and problem-solving abilities. Especially for children and adolescents, who need to first develop these cognitive functions, AI can be detrimental, preventing them from building the basic skills required for learning and independent thinking.
The erosion of human skills
The effects of AI are not limited to cognition. Emerging evidence suggests that high reliance on AI tools may also affect social functioning. Children who frequently interact with AI systems at school show reduced ability to recognise and interpret others’ emotional expressions, alongside weaker social skills. This aligns with broader developmental theory: social cognition develops in children through real-world interaction, where they must learn and rehearse complex, often ambiguous human signals. Many AI systems, on the other hand, are intentionally built to be agreeable, supportive and non-confrontational. Unlike humans, they constantly validate the user, adapt to preferences, and provide immediate emotional feedback. They are frictionless, and for young people, this can make human relationships – which are complex, effortful and sometimes uncomfortable – much less appealing.
This design fosters attachment, and the emergence of AI companions intensifies this dynamic. Recent data suggest that 70 per cent of adolescents have used chatbots, and that 30 per cent already prefer discussing personal issues with a chatbot rather than a human. Some young users form strong emotional bonds with chatbots, using them to rehearse social interactions or seek validation. While this can feel supportive in the short term, it replaces real interactions with frictionless, predictable exchanges that do not require perspective-taking, negotiation or emotional effort. Over time, this can contribute to social withdrawal and increasing reliance on artificial relationships rather than real-world connections. As a result, children may miss critical opportunities to develop basic social skills and learn how to form and maintain reciprocal relationships.
Dependency and risk
A cross-national survey shows that interacting with a chatbot can provide a sense of emotional support and reduce loneliness in the short term. However, this support also increases the risk of dependency, as users form emotional bonds with the chatbot – bonds that strengthen with more frequent and intense use, and that are observed even among individuals who have rich social lives. Lonely individuals, in other words, are not the only ones at risk of dependency.
These risks are even more pronounced in children and youth, who, due to their emotional immaturity, are more likely than adults to trust AI, disclose personal information and blur the distinction between artificial and human agents. Cases are already emerging in which adolescents have formed strong emotional bonds with chatbots that reinforced harmful thoughts and, in some instances, contributed to suicide.
Chatbots have, in some cases, provided harmful advice, including discouraging users from seeking help or engaging in dangerous behaviours. Eating disorders, self-harm and social withdrawal can all be amplified in environments that mirror and reinforce existing beliefs.
These failures reflect a fundamental problem: the primary objective of many AI systems is not the user’s safety, but to maximise engagement for the company.
Policy is lagging behind
Despite these concerns, AI is being integrated rapidly and uncritically into children’s environments, often without public debate. In education, AI tools are already embedded in widely used devices and platforms, such as Google’s services. In private life, access is largely unrestricted.
Some governments have begun to act. China, for example, introduced regulations in 2026 restricting AI systems, especially ‘AI-companions’ for minors that simulate intimate or emotionally dependent relationships. Regulations also require clear disclosure that users are interacting with artificial systems.
A dystopian future
AI will undoubtedly shape the future. But our future also depends on a healthy young generation with good mental health, capable of critical and independent thinking and genuine human collaboration. These capacities are built in childhood through effortful learning, social interaction and meaningful relationships. If these foundations weaken, the costs will be immense, not only at the individual level, but also economically and societally through a weakened workforce.
Poorly regulated AI risks accelerating exactly this outcome. By making thinking optional and relationships frictionless, it may prevent children and adolescents from developing the very skills they need to function as adults in society. The challenge, therefore, is timing and protection. Children must have space to develop the cognitive and social capacities that make them human before being introduced to AI. These skills will allow them to use AI as a tool and not be shaped by it.
Photo credits: Shutterstock/Frame Stock Footage