The new gospel according to Silicon Valley, what happens when AI replaces teachers, how to steal the 10 habits of AI super users, and what would happen if your ChatGPT history were made public?
The new gospel according to Silicon Valley
The great and the (not so) good gathered in Aspen to discuss AI and employment, and a profound shift emerged in the discourse on AI’s impact on jobs. As covered in the Wall Street Journal, CEOs are becoming increasingly vocal in predicting mass white-collar job displacement. Ford's CEO stated bluntly:
"Artificial intelligence will literally replace half of all white-collar workers in the US."
Though offering no timeline or evidence, he cited polling showing one-third of Americans now favour trade school over university. Other executives echoed his apocalyptic predictions, announcing AI-driven redundancy plans. Amazon's Andy Jassy discussed replacing workers with AI, while Salesforce's Marc Benioff claimed 30-50% of his company's work is already AI-driven. The World Economic Forum also predicted that 40% of current employee skills could become obsolete within five years, identifying office workers, accountants, and cashiers as particularly vulnerable to AI replacement.
What strikes me about this discourse is how it's becoming self-fulfilling. Regardless of AI's actual capabilities, the constant drumbeat of replacement predictions is already reshaping career choices. Young people are making life-altering decisions based on speculation rather than evidence. Every CEO who announces AI will replace workers contributes to a cultural shift that makes such replacement seem inevitable rather than optional.
I'm naturally sceptical of these doomsaying pronouncements, but something more significant is happening. Even if AI remains only vaguely useful, the perception that it can replace people, or that it is smarter than us, is far more damaging than the current reality. This matters because perception shapes behaviour. Why pursue English literature, accounting, or graphic design if you believe AI will eliminate these fields? University enrolment in these subjects will likely decline as students see the writing on the wall. Simultaneously, any job requiring physical presence and human connection suddenly appears more secure. We can expect increased interest in manual skills and care-based professions, whether connecting data centres, constructing factories or caring for elderly people.
The Aspen consensus reveals something troubling about how technological change is being managed. Rather than thoughtful integration preserving decent human jobs whilst capturing efficiency gains, we're witnessing a rush towards replacement that treats people as expendable. The celebration of job elimination suggests a profound disconnect between decision-makers and those bearing consequences.
Big Tech's narrative of inevitable human labour displacement is reshaping society, normalising a corporate choice as natural evolution rather than a direct consequence of their actions.
The risks at the heart of AI in education
Some of the Strategic Agenda team were in Botswana this week filming a UN project to connect every school in the world to the Internet. Sam, our Head of Film, was conducting interviews with local teaching staff when something unexpected emerged from the conversations. When asked about the benefits they foresaw from internet connectivity, the overwhelming response wasn't access to online libraries or educational videos—it was ChatGPT.
The reasoning was startlingly practical. These schools couldn't afford textbooks, but ChatGPT could suddenly help teachers create lesson plans and educational materials for nothing more than the cost of mobile data. It was a perfect example of how AI can expand what economists call the possibility frontier: the boundary of what is achievable with limited resources.
As someone obsessed with learning, I find this exciting. We take for granted that we can access the world's knowledge from our sitting rooms or pop into a local library. That's a luxury unavailable to most of the world, and that's before we even consider global literacy issues. AI democratises access to information in ways that traditional infrastructure simply can't match for speed and cost.
But we'd be naïve to think technology alone will solve structural problems. Knowledge isn't power without the means to access and wield actual power. A brilliant lesson plan created with AI's help still requires a qualified teacher, adequate facilities, and students who aren't too hungry to concentrate. AI provides possibility, not opportunity; the distinction is crucial.
More troubling are the risks around content origin and quality. This same week, Elon Musk's Grok chatbot faced heavy criticism for praising Hitler, highlighting how AI systems can amplify dangerous perspectives. We increasingly live in a world where people lazily ingest content of suspect origin, particularly in education. Young people now get their algorithmically fed news from social media echo chambers rather than from newspapers with higher journalistic standards.
The Botswana example crystallises a broader challenge. On one hand, AI can provide educational materials to schools that have never had access to basic textbooks. On the other, those same schools may lack the resources or framework to help students critically evaluate AI-generated content. It's a classic example of technology solving one problem whilst potentially creating another.
This matters because educational content isn't culturally neutral. What ChatGPT considers appropriate curriculum material reflects the biases and perspectives of its training data, which skews heavily Western and English-language. A teacher in rural Botswana using AI to create lessons about history, literature, or social studies may inadvertently import cultural assumptions that don't serve their students well.
The solution isn't to reject AI in education; that horse has clearly bolted, and frankly, the benefits are too significant to ignore. Instead, governments need to ensure that AI-generated educational content undergoes some form of cultural and factual review. More importantly, critical evaluation of AI-generated content needs to become a core part of national curricula, not an afterthought.
The teachers in Botswana getting excited about ChatGPT need training not just in how to use it, but in how to question it. The challenge for policymakers is managing this transition thoughtfully. AI in education isn't going away; it's too useful and too accessible. The question is whether we can build the critical thinking infrastructure necessary to harness its benefits whilst avoiding its pitfalls. Based on what I heard from Botswana, that work needs to start now.
What if your AI chat history were made public?
I’m old enough to know that nothing I type or say into a computer is ever truly private. But there’s something disarming about simply talking to a bot, and that is the dangerous path many AI users are treading without realising the huge implications for their privacy. From using ChatGPT for relationship advice to asking it to help you cheat your way through some bureaucratic process, billions of us are now naively relying on the goodwill, and the paper promises, of Big Tech not to spill the beans on how lazy, selfish and mentally unhinged we can be.
Many trust LLMs like ChatGPT with their most intimate confessions because OpenAI promised them that the company would permanently delete their data upon request. However, last week, in a Manhattan courtroom, a federal judge ruled that OpenAI must preserve nearly every exchange its users have ever had with ChatGPT — even conversations the users had deleted.
The upshot? Those chats you thought were private might now be preserved, possibly for legal scrutiny.
This ruling hits hard for anyone who trusted OpenAI’s deletion policy. Whether it was a sensitive health question or a late-night ramble about personal struggles, users expected those words to stay private or be erased. Now, there’s a chance they could resurface, which raises big questions about digital privacy. Legal experts say this case could push tech companies to hold onto user data indefinitely, even if it shakes the confidence of the people using these tools.
OpenAI has yet to share details on how they’ll respond to the ruling or protect user data going forward, but this situation is a wake-up call. It’s a reminder that what we share online, even with something as seemingly private as an AI chatbot, might not be as secure as we assume.
As this legal case unfolds, it’s sparking a broader conversation about trust, transparency, and what privacy really means in the age of AI. For now, it might be worth pausing before hitting send on that next deeply personal chat.
10 habits of the top 1% of AI users
They talk rather than type - I'm a big fan of the ChatGPT app. If you haven't downloaded the updated app, you're missing out. I guarantee you'll be shocked by the naturalness of your AI counterpart: they've nailed the nuances of human speech, its pauses, rhythm and tone. The personas are really impressive too. You can use them for language learning, therapy practice, even sales rehearsals. Get it to act as a psychodynamic therapist, pitch practice partner, or reading buddy to help you really embed what you learn.
Triangulate responses - Triangulation is a fancy term for using multiple models to arrive at the best outcome. Ask one model something, get another to critique it, then feed both responses into a third. It's like having a panel of experts rather than just one opinion.
Track your AI spend and be ruthless about subscriptions - It's amazing how quickly you can rack up costs. £20 may not seem like much, but you can soon find yourself spending over £500 a month and still signing up for gimmicky apps that are just a slick wrapper on top of OpenAI: basically a complex prompt wrapped around a mainstream LLM.
Always edit your AI output - Never just copy and paste what AI gives you. It's a starting point, not the finish line. The best users treat it like a first draft that needs human polish.
Focus on context engineering - If you're still obsessing about prompt engineering, that's so yesterday. Context engineering is where the smart money is - feeding the AI the right background information and setting up the conversation properly.
Use the right model for the job - Different AIs excel at different things. For example, Claude is best for nuanced writing, OpenAI's o3 for complex reasoning, and Perplexity Labs for research. Don't get lazy and reach only for the last tool you used, or, sacré bleu, the same chat thread.
Apply the three-times rule - If you do something three times in a day, ask yourself if you can automate this with AI. This mindset shift helps you spot genuine opportunities rather than forcing AI into places it doesn't belong.
Use ChatGPT Projects, Gemini Gems and Claude Projects properly - It's tempting to have one long chat, but it's cleaner and better to use the projects feature. It provides static context and keeps things organised instead of letting conversations drift all over the place.
Train AI on your writing style - This approach is particularly effective with Claude. If you're short on time and can't upload extensive context for more effective AI queries, provide a few examples of your writing instead. This helps the AI match your tone, saving the significant editing time you'd otherwise spend turning corporate-sounding text into your personal voice.
Get expert advice on tap - You don't need to read entire books when an LLM has absorbed that person's or company's processes and thinking. It can be your expert adviser on anything from McKinsey-style strategy to therapeutic approaches; you just need to set this up in your prompt, e.g. “You are a McKinsey consultant tasked with…”
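The triangulation habit above can be scripted in a few lines. Below is a minimal sketch of the three-step workflow; the ask() function is a stand-in for whichever chat-completion API you actually use (the OpenAI or Anthropic SDKs, for instance), stubbed here so the flow is easy to follow, and the model names are illustrative rather than real identifiers.

```python
def ask(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call to the named model."""
    return f"[{model}] response to: {prompt[:40]}"

def triangulate(question: str) -> str:
    # Step 1: get a first answer from one model.
    draft = ask("model-a", question)
    # Step 2: have a second model critique that answer.
    critique = ask("model-b", f"Critique this answer:\n{draft}")
    # Step 3: feed the question, draft and critique to a third model
    # and ask it to synthesise the best final answer.
    final = ask(
        "model-c",
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Write the best possible final answer.",
    )
    return final

print(triangulate("Explain context engineering in one paragraph."))
```

Swap ask() for real API calls and you have a panel of experts on demand; the structure stays the same whichever vendors you plug in.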
Using the right (AI) tool for the job
To help you leverage AI better, I thought it timely to give you my list of which AI to use for which job.
ChatGPT - The original AI "new kid on the block" has matured into the AI world's comfy pair of slippers: often the lazy, yet good, choice. Even if you don't bother switching models or upgrading to the Pro version (you should), the free GPT-4o model is a versatile generalist, excelling at image generation, data analysis, and general research. Its DALL-E integration makes it particularly useful for visual content creation, whilst its browsing capabilities handle broader research questions effectively. Multimodal understanding and web access combine to create a genuinely useful general-purpose tool.
Claude - Quietly Claude has established itself as the superior choice for writing and coding tasks. Its nuanced understanding of tone and context produces more natural prose than competitors. Coding capabilities are intuitive, like collaborating with someone who understands what you're building. The new artifacts feature is valuable for iterative development.
Gemini - Distinguished by its extraordinary context window, which lets it process millions of words. That makes it ideal for debugging large codebases, analysing extensive documentation, or understanding vast and interconnected information.
Grok - Assuming you don’t mind Elon Musk’s mucky pawprints all over your content, Grok brings a distinctly creative approach to writing tasks. It’s quick as well, so if you’re in a rush it will give you the fastest answer. Grok 4 has just been released, which apparently is a beast, but I can’t stomach supporting the ‘cause’ to the tune of $200/month. In short, if you need AI output that doesn't feel like it's been through corporate sensitivity training, Grok is your (AI) man.
Perplexity - A research-focused search engine, complete with citations. It synthesises information from diverse sources transparently, replacing traditional search for complex queries. It's what Google might have become if it prioritised understanding over advertising. Its "Labs" feature, available to Pro users, uses an AI agent that gathers data from various LLMs to compile comprehensive reports and online applications.
Gamma - It automates presentation creation with sophistication, understanding flow and visual hierarchy to produce professional results from simple prompts. It makes you wonder why you ever spent hours fighting with PowerPoint.
Bolt.new, Lovable.dev and v0.dev - A significant advancement in web development tools. I've grouped these together because they're all excellent for creating prototypes or even simple, fully functioning applications. Unlike static code generators, they create full web applications with live previews, package management and deployment capabilities: complete conversational development environments. This approach offers a glimpse into the future of development.
What we’re reading this week
A new study from Stanford highlights the risks of using AI therapists.
The first human trial of a fully AI-designed drug. Isomorphic Labs is preparing its first human trials for AI-designed cancer drugs that aim to "solve all diseases."
A 45-year-old Atari 2600 chess program just beat Microsoft Copilot and ChatGPT-4o. The vintage AI with 128 bytes of RAM outplayed billion-dollar modern AI.
Going on a trip? Let AI be your psychedelic guide. How AI apps like "TripSitAI" are looking to supplant humans.
Goodbye model-switching, hello one AI model to rule them all. OpenAI confirms that its long-awaited GPT-5 will merge all models into one this summer, ending the current "alphabet soup."
Tools we’re playing with
Perplexity Labs. Testing the claim that it can accomplish in 10 minutes what would previously have taken days of work, tedious research and coordination of many different skills.
HeyGen. It’s work experience week for many post-GCSE kids in the UK so I’m setting my son the task of creating an automated AI workflow using HeyGen’s avatar production process.
Flow – Veo 3’s video generation tool. Impressive on the surface but still tricky to stitch together something coherent.
That's all for now. Subscribe for the latest innovations and developments with AI.
So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.