What is the value of zero-cost research? Why we shouldn’t give up the fight for intelligence, the AI summit shows that governments don't really care about regulation, and Qwen dethrones DeepSeek
Thought of the week: What is the value of zero-cost research?
I spent much of the weekend playing with deep research, a chat-based agent developed by OpenAI that conducts multi-step research on the internet for complex tasks. Historically, large language models such as ChatGPT have acted like probability machines, gobbling up billions of pieces of human-written text and regurgitating new text that sounds as if it were written by a human. In short, not thinking but guessing the right answer based on probability. The new paradigm of ‘test-time’ thinking, or inference, forces the language model to check its own work before answering a question. For example, if you ask the AI how much it would cost to fly you and your family to the moon and back, the model would break the query down into multiple related questions, show you its thought process and give you a reasoned answer.
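The decompose-then-answer loop described above can be sketched in miniature. This is a toy illustration, not OpenAI's actual pipeline: the sub-questions and cost figures below are invented for the example.

```python
# Toy sketch of test-time decomposition: split a hard question into
# sub-questions, "answer" each one, then combine the results.
# A real reasoning model generates these steps itself and backs them
# with web research; the figures here are entirely hypothetical.

def decompose(question):
    # A reasoning model would generate these sub-questions on the fly.
    return [
        "What does a seat on a lunar mission cost per person?",
        "How many people are travelling?",
        "Does the fare cover the return leg?",
    ]

def answer(sub_question):
    # Stand-in for a research step; a real agent would search and cite.
    invented = {
        "What does a seat on a lunar mission cost per person?": 100_000_000,  # $, made up
        "How many people are travelling?": 4,
        "Does the fare cover the return leg?": True,
    }
    return invented[sub_question]

def reason(question):
    # Show the thought process, then give a reasoned answer.
    steps = [(q, answer(q)) for q in decompose(question)]
    per_seat, people, round_trip = (v for _, v in steps)
    total = per_seat * people if round_trip else per_seat * people * 2
    return steps, total

steps, total = reason("How much to fly my family to the moon and back?")
for q, a in steps:
    print(f"- {q} -> {a}")
print(f"Estimated total: ${total:,}")
```

The point is the shape of the loop, not the numbers: the visible chain of sub-questions is what separates a ‘thinking’ model from a one-shot probability machine.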
OpenAI’s new deep research ‘agent’ is the next iteration of these ‘thinking’ models. In our example, the AI researcher would not just give you the answer but present you with a fully researched, PhD-level thesis on how to do it, complete with accurate citations and references. In my tests, the result was startlingly convincing: a polished document indistinguishable from one produced by human experts. In short, this version 0.1 of a research agent is already capable of outperforming an academic researcher in a hundredth of the time and for pennies. This raises an obvious question: how do we value research that can be delivered almost instantly and at almost no cost? If anyone can produce McKinsey-level reports at the click of a button, why bother hiring McKinsey?
The answer will initially be verification and ultimately indemnification. Though the cost of ‘intelligence’ will have fallen close to zero before the end of 2025, companies and organisations will still want someone to check that their AI agents (or their suppliers’ AI agents) haven’t screwed up the work. The value will shift to both ends of the spectrum: paying someone with the expertise to ask the right questions at the outset to get the right outcome, and someone to sense-check (indemnify) the AI agent’s output, with the real cost lying in extracting real value from the research. For example, an organisation looking to assess a new business opportunity would ask its AI business-consultant agent to do the quick and dirty ‘base’ research, conduct the interviews and come up with an implementation plan, and then pay human consultants to deal with the very human problem of managing the change.
With zero-cost intelligence available to all, the value for professional service providers will be in verifying AI output and ultimately helping clients maximise its human value.
Why we shouldn’t give up the fight for intelligence
If you want to understand the future, a good place to start is with Sam Altman’s periodic videos. The most recent was a recording of a pitstop he made at the University of Tokyo, where he spoke about how humanity should be preparing for an AI-driven future. In the video he is very clear that AI is already way smarter than us and that you’re not going to outsmart something that processes information millions of times faster than your brain ever could. Instead of resisting or competing, Altman argues that we should all lean into AI and welcome with open arms our new and omnipotent AI co-worker, therapist and maybe even life coach. Need creative ideas? Outsource them. Want resilience strategies? Ask ChatGPT. Trying to learn new skills? Let the algorithms do the heavy lifting.
But what Altman frames as empowerment feels more like surrender. It’s as if there’s a collective sigh of relief that, soon, we won’t have to bother with all that pesky mental heavy lifting. Daniel Kahneman, in Thinking, Fast and Slow, famously categorised this as System 1 and System 2 thinking – System 1 being fast, intuitive and easy, while System 2 requires slow, deliberate effort. Given the choice, humans default to System 1 whenever possible. AI is accelerating this tendency, making it easier than ever to outsource deep thinking altogether. Which means humanity is on the cusp of acting like my wonderfully submissive little dog: rolling over and letting AI – and Big Tech – tickle its belly.
To me, this explains the deafening silence (and sad truth) around AI encroaching on human intelligence. People just don’t seem to care. As we clearly can’t rely on governments (which are hell-bent on AI domination) to regulate AI usage, it’s up to us all to rage against the machine and ensure that we don’t let our System 2 thinking go to the dogs. The alternative is Altman’s world, in which the calculator always wins – and humans are reduced to button-pushers, clinging to relevance by letting the machine do all the work.
Why the Paris AI summit shows that no one really cares about AI regulation
The AI Action Summit in Paris this week was a masterclass in hypocrisy and self-interest, exposing the hollow rhetoric surrounding ethical AI. While 61 countries signed a feel-good declaration pledging “open”, “inclusive” and “ethical” AI development, the US and UK predictably sat it out, proving once again that lofty ideals are no match for cold, hard ambition. The UK balked at the idea of “global governance”, fearing it might cramp its style as a so-called champion of ethical innovation – a laughable claim given its track record. Meanwhile, the US rejected the agreement altogether, with US Vice President JD Vance dismissing it as overly cautious, warning against letting American AI fall prey to “authoritarian censorship”. Translation: we won’t let pesky regulations get in the way of our tech monopolies or geopolitical dominance.
This cynical posturing is par for the course. Governments and corporations alike are paying lip service to ethical AI while racing ahead unchecked, treating regulation like an obstacle rather than a necessity. Public concern? Merely performative outrage until something truly catastrophic happens – by which point, it’ll be too late. The truth is, no one actually cares about regulating AI because doing so would mean sacrificing power, profit or progress. And in this game, ethics are just window dressing.
Qwen 2.5 is the new DeepSeek
This week’s new AI kid on the block is Qwen 2.5. Developed by Alibaba (China’s answer to Amazon), it’s a Swiss Army knife of AI capabilities, but what sets it apart – at least for now – is that it not only handles deep research but also generates in-thread video and audio. I generated the above clip in seconds, and the realism is scary. Critically, Qwen is free to use, so you don’t need to shell out $200 a month for Sora or deep research. Even better, the model itself is free to download. For this level of generosity, we have DeepSeek to thank.
DeepSeek’s arrival marked a turning point in the open-source AI landscape, signalling a seismic shift toward powerful, freely available AI models. These Chinese models are already 'good enough' today, and within a year, they’ll likely be 10 times more powerful, eroding the once-impenetrable moat of closed-source AI. Proprietary models are no longer the only game in town, and OpenAI has taken notice, suddenly rebranding itself as an open-source champion.
This shift is a win for everyone. AI should be a global public good, with corporations competing on the value they provide through deployment rather than hoarding access to the models themselves. I have no issue with the best models being locked behind paywalls, as long as the good-enough ones remain open and accessible to all.
AI video of the week
It’s always good to keep an eye on the pace of development of AI video generators. The above 12-minute short was made by a single creative using Google’s Veo 2 and stitching together image-to-video sequences. We are clearly on the cusp of entire movies being made with AI in weeks rather than years.
Clearly, AI-driven production will dramatically lower costs, making high-quality film-making accessible to independent creators, not just Hollywood studios. Storyboarding, CGI and even live-action sequences could be synthesised at scale, blurring the line between traditional and AI-generated cinema. 2025 is shaping up to be the beginning of the AI-powered film-making era.
What we’re reading this week
Altman announces the roadmap for ChatGPT-5, due in a few months.
What Elon Musk’s $100 billion bid for OpenAI is really about: muddying the waters.
Le Chat – French company Mistral releases a revamped AI assistant platform with 10 times faster response speeds and new iOS and Android apps.
Australian courts are implementing new limits on lawyers' use of AI due to concerns over inaccuracies in court documents.
Researchers replicate OpenAI's deep research within 24 hours.
Tools we’re playing with this week
Operator – Still marvelling at OpenAI’s first usable browser agent. An idiot-friendly intro to the power of agents. Best to wait for the inevitable free roll-out but worth the $200 to see a very basic glimpse into our collective agentic future. Best use case at the moment is filling out forms.
Riffusion – AI keeps coming for creatives. Riffusion is the new riff on text-to-music. Unlike its forebears Udio and Suno, it’s totally free.
Replit – Free mobile-based text-to-app tool that lets you create a mobile app in five minutes on your phone.
That's all for this week. Subscribe for the latest innovations and developments in AI.
So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.