It’s official: AI makes us stupid. What happens when we get trapped in the machine? And how Google is eating the web
It’s official: AI makes us stupid
I was listening to a highly recommended podcast recently – Esther Perel’s appearance on Steven Bartlett’s “The Diary Of A CEO”. During the podcast, Perel, a world-renowned relationship expert, gave absolutely dynamite advice on how to have healthy relationships, but it was her use of the term ‘social atrophy’ that got me thinking.
Bartlett was trying to make a (very weak) case for how men are being penalised by dating apps and provided as evidence the infamous case of the guy who calculated that he'd swiped 2 million times only to achieve one real-life date. As well as calling out the stupidity of the guy’s continued swiping, Perel pointed to the wider problem of what happens when we outsource the management of our social lives to algorithms: social atrophy. Our social muscles, she argued, waste away when we delegate human connection to swipe-based interfaces. In short, we lose the ability to interact in real life.
I started to wonder if we’re witnessing a parallel phenomenon – what I’d call intellectual atrophy: as AI tools become more and more powerful, we use our grey matter less and less, until we lose the ability to think critically because those muscles have withered away from lack of use.
Evidence for my very common-sense hypothesis is now emerging, and is, to be frank, worrying.
A joint study from Carnegie Mellon University and Microsoft Research found a significant negative correlation between frequent AI tool usage and critical thinking ability. Another study, published in the journal Societies, found that people who rely heavily on AI tools performed worse on critical thinking assessments, largely because of what researchers call ‘cognitive offloading’ – essentially, letting machines do our thinking for us.
Most concerning is MIT research showing that ChatGPT users demonstrated the lowest brain engagement when writing essays, with participants growing increasingly lazy over time, often resorting to copy-and-paste by the study's end. The scientists warned of potential ‘cognitive debt’ – the hidden cost of efficiency that erodes our capacity for deep, reflective thought.
Scientists have even coined the term ‘AI-induced cognitive atrophy’ to describe how over-reliance on AI chatbots reduces our ability to think critically and develop independent thought.
Ok. So it’s official: ChatGPT makes us stupid. The obvious way to preserve the small number of brain cells we have is to keep forcing ourselves to do things from scratch rather than reaching lazily for ChatGPT to pick up the slack. By all means get AI to do low-level, time-sapping tasks that are ultimately unimportant, like taking meeting notes, summarising reports and (possibly) analysing data. But to keep your old noggin in tip-top shape, you need to actively resist your own convenience-seeking tendencies.
This doesn’t mean abandoning AI experimentation, which would be both impractical and rather regressive. Rather, it's recognising that we need to be more intentional about when and how we engage with these tools. Recent studies suggest that moderate AI usage doesn't significantly affect critical thinking, but excessive reliance leads to diminishing cognitive returns.
Much like Perel's observations about social connection, we must resist the temptation to outsource our thinking entirely, preserving space for the messy, inefficient but ultimately irreplaceable process of human cognition.
Trapped in the machine
I had a lovely, very analogue week looking after my 81-year-old mother, who moved to Nigeria to care for my dementia-ridden father and was back in the UK for a visit. Various family members took it in turns to play chef and chauffeur, and on her penultimate day it was my sister-in-law’s turn to ferry mum to her myriad medical check-ups. Mid-morning, unable to get hold of her husband (my brother), she called in a panic to say she’d noticed a drivetrain error warning on the dashboard of her electric car.
Calling on my experience as a former IT help-desker at the library of the London School of Economics, I gave her the classic advice that 90% of IT issues can be fixed by switching something off and on again. “It’ll be fine,” I reassured her. “If the car still moves, just drop off mum, then head to the nearest garage.”
Obviously, I forgot about Murphy’s law – that anything that can go wrong will go wrong, and at the most inconvenient time – and two minutes after my sage advice, her car died completely in the middle of Shepherd’s Bush Green, a busy road in West London, with my mother trapped inside. She called back to ask, essentially, “Now what?” Feeling incredibly guilty, I jumped in my car and rushed down to help.
After 15 minutes of futile button-pushing and screen-prodding, the car was still unresponsive. Normally, in this situation, you’d do what people have done for decades – find some helpful blokes to push the vehicle clear and call roadside assistance. But that wasn’t an option: Mercedes, in their wisdom, had done away with the manual parking brake, so once the car immobilised itself there was no way to release it and push it to safety. We were trapped inside what had become a massive, malfunctioning computer with no manual override.
The recovery services, when contacted, refused to attend because it was “too dangerous for staff health and safety.” Professional help was unavailable because the situation had become too complex for standard procedures. Long story short, after 40 minutes of not touching anything, the poor car, which had obviously just needed a break, restarted, and we managed to drive it very slowly to the local garage for a reboot.
For me, this experience reflects the troubling pattern of how we deploy technology across society. We're creating systems that work brilliantly under normal conditions but leave users completely helpless when things go wrong, with system designers often failing to include simple mechanical or manual solutions when the tech fails, as it invariably does.
We're witnessing this same dynamic with AI: complex systems being rolled out across healthcare, education, criminal justice and employment without adequate understanding of their failure modes or sufficient provision for human intervention when they inevitably malfunction.
The uncomfortable truth is that we're building a world where being trapped becomes the norm rather than an unfortunate exception. There might not be helpful strangers able to push us to safety when the technology fails us, and the recovery services may be too overwhelmed by complexity to respond at all.
AI video of the week
This fully AI-generated ad, created with Google’s Veo 3 in just days for only $2,000, aired during the NBA Finals.
How Google is eating the web
Google released so many features at their May I/O event that I’m only now getting round to understanding the implications. A key one is AI Mode, which means that every time you Google something, Google essentially gives you an in-depth answer itself. Yet what seemed like an innocuous little change is having a massive impact – possibly the most significant shift in web economics since the advent of search engines themselves.
Key observations:
Google now provides AI-generated summaries instead of directing users to source websites.
Traffic that previously flowed to content creators now remains within Google's ecosystem.
Traditional SEO and advertising models face existential disruption.
Revenue streams are being redirected from publishers directly to Google.
I decided to test this myself using a query about my WHOOP fitness tracker – specifically wanting to understand the effects of late workouts on sleep recovery. Rather than taking me to WHOOP’s website, Google presented a polished summary of everything I needed to know, scraped directly from that site's content. So I got my answer without ever visiting the source, meaning WHOOP or any other authoritative website received no traffic, no potential ad revenue and no opportunity to sell me more nonsense. Google had essentially become both the question and the answer, cutting out the original content creator entirely.
Yes, Google helpfully showed me a sidebar of sources, but did I click on any of the links? Of course not. Herein lies the problem, which Google knows only too well: we humans are lazy, more than happy to be spoon-fed rather than click through – or, increasingly, think – for ourselves.
This represents a fundamental restructuring of the web's economic model. For decades, the implicit bargain was simple: Google would direct users to websites in exchange for indexing rights, and everyone benefited from the resulting traffic flow. Publishers got visitors, Google got search dominance and users found what they needed.
Now Google is keeping users within its own ecosystem while still extracting value from publishers' content. The company appears to be hedging its bets by pushing Gemini subscriptions, possibly anticipating reduced advertising revenue as this model scales. But the immediate effect is that all the traditional levers of online visibility – SEO, Google ads, content marketing – have been fundamentally altered overnight.
This raises a series of critical questions:
How do content creators sustain themselves when their work is repackaged without traffic attribution?
What happens to the web's diversity when Google becomes both curator and destination?
Are we witnessing the end of the open web as an ecosystem of independent publishers?
How do smaller players compete when the platform itself becomes the primary beneficiary?
The most troubling aspect might be Google's new AI podcast feature, which can generate entire audio discussions based on search queries. This isn't just summarising existing content; it's creating entirely new formats that keep users even further from original sources.
We do indeed live in interesting times, though perhaps not in the way most of us would prefer. The web that emerged from the democratising promise of the early internet may be evolving into something far more centralised and extractive than its founders ever envisioned.
What we’re reading this week
Copyright is officially dead. At least it is if you’re an AI developer. Two landmark rulings this week saw US judges side with Anthropic and Meta in copyright cases brought against them over AI training.
Google DeepMind introduced its game-changing AlphaGenome, a new AI tool capable of predicting how DNA mutations affect thousands of molecular processes. The model was trained in just four hours and can analyse up to 1 million DNA letters, predicting how single variants or mutations in human DNA sequences affect a wide range of the biological processes that regulate genes.
Anthropic upgraded Claude with new app-building capabilities, allowing any user to create, host and share interactive AI-powered apps directly from simple text prompts via its Artifacts workspace. This matters because anyone can now build and share an app within Claude itself.
A new report from The New York Times detailed cases of ChatGPT use reinforcing and fuelling user issues such as delusions, conspiratorial beliefs and mental health crises.
Tech giants are set to disrupt the advertising industry with new AI tools, posing a significant threat to traditional agency models.
Tools we’re playing with
Claude’s newly upgraded Artifacts feature: Lets you code basic apps like this rubbish clone of the classic 80s game Defender. What’s interesting, though, is that you can publish your app and anyone can play it.
11.ai, ElevenLabs’ new AI agent: Like a lot of consumer-facing AI agents, it’s rubbish, for now.
Perplexity Labs: An interesting mash-up of AI-powered search and agents.
Replit.ai: A mobile-first vibe-coding platform. I’ve been wasting time trying to vibe-code a Duolingo clone.
That’s all for now. Subscribe for the latest innovations and developments in AI.
So you don’t miss a thing, follow our Instagram and X pages for more creative content and insights into our work.