How to encourage your staff to see AI as an ally rather than as a threat, the importance of soul when presenting, the impact of gamification on work and how to pick the right ChatGPT model
How to get your staff to see AI as an ally rather than a threat
Yesterday I had the talk with my design team – the one about AI. They'd initially pushed back against exploring AI tools, seeing them as gimmicks or threats. But as we talked, it became clear they were thinking about AI in the wrong way.
I asked the team to look carefully at their workflow, challenging them to pinpoint tasks that weren’t truly creative – those things that were tedious, time-consuming or troublesome (my 3T framework!). Any job ticking these boxes is likely a prime candidate for AI automation.
As the UN is our biggest client, much (though a shrinking share) of our work revolves around designing flagship reports on some of the world’s most pressing problems. For our designers, one annoying 3T task is taking Excel-generated charts and meticulously turning them into polished – but largely identical – illustrations in Adobe Illustrator. When you’re dealing with something like a carbon emissions report featuring 200 nearly identical graphs, it becomes an exhausting and morale-draining process.
So, having identified this process as an issue, the next step is to find an AI-powered solution that automates the repetitive work. Not only will this likely improve quality and reduce turnaround times; more importantly, it gets staff thinking about how AI can free them up for more creative tasks rather than simply replace them.
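To make that concrete, here’s a minimal sketch of what the automation could look like, assuming a hypothetical emissions.xlsx workbook with one sheet per chart (year and emissions columns) and made-up house-style values – not our actual templates. The shape of the solution is the point: matplotlib exports SVG files whose paths stay fully editable in Illustrator, so designers only do the final polish.

```python
# Sketch: render every sheet of an Excel workbook as a house-styled SVG
# that opens as editable paths in Adobe Illustrator. The file name,
# sheet layout and styling values are illustrative assumptions.
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

HOUSE_STYLE = {"color": "#1f6f8b", "linewidth": 2}

def render_chart(name: str, df: pd.DataFrame, out_dir: Path) -> None:
    fig, ax = plt.subplots(figsize=(6, 3.5))
    ax.plot(df["year"], df["emissions"], **HOUSE_STYLE)
    ax.set_title(name)
    ax.set_ylabel("CO2 emissions (Mt)")
    ax.spines[["top", "right"]].set_visible(False)
    fig.savefig(out_dir / f"{name}.svg")  # SVG stays editable in Illustrator
    plt.close(fig)

out_dir = Path("charts")
out_dir.mkdir(exist_ok=True)

# sheet_name=None loads every sheet: 200 near-identical graphs, one loop.
for name, df in pd.read_excel("emissions.xlsx", sheet_name=None).items():
    render_chart(name, df, out_dir)
```

Run it once per report and the 3T work disappears; the designers’ time goes into the handful of charts that genuinely need bespoke treatment.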
This conversation sparked a broader realisation: one of the best ways to secure your position in a rapidly changing workplace is to become the go-to person for exploring and implementing AI solutions. AI isn’t necessarily a magical cure-all, but given how rapidly its capabilities evolve, adopting an AI-first mindset is quickly becoming essential.
The importance of soul when presenting
I’ve been invited to speak about AI at an upcoming event, the Commonwealth Sustainable Energy Forum.
The audience will be a mix of policymakers from across one of the world’s most diverse political bodies, representing countries as varied as Canada and Tuvalu, collectively responsible for over $15 trillion in global trade. That diversity makes the task more complex: how do you present on AI when you don’t know your audience’s level of technical competence or even their access to AI infrastructure?
Naturally, I asked AI to help me figure it out.
Using ChatGPT, I fed in some ideas and it pulled together a crisp outline. Bullet points, speaker notes, supporting images, even a suggested style based on TED Talks. Slick. Polished. Efficient. I then plugged this text into Gamma, an incredible AI presentation maker, and after a couple of days of tweaks, I had a shiny deck and thought, “This is it!”
Gamma even managed to create an animated version of Sam Altman talking on stage.
Then I delivered a dry run to my team. Silence, and not the good kind. The presentation was technically fine but ultimately soulless. It didn’t land because it wasn’t made for people – it was made to please me. AI had done what it’s designed to do: mimic existing styles, answer my prompt and optimise for form. But it missed the things that actually make a talk connect: emotional truth, lived experience, human tone.
So I scrapped it. Rewrote the entire thing in a few furious hours. Made it rougher, more personal, more real. I cut most of the AI-generated visuals. I wrote like I was talking to someone I respected, not trying to win a design prize.
My key takeaway:
AI can generate. But it can’t empathise. It can summarise. But it can’t prioritise. And most of all, it can’t connect. That’s your job.
If you're using AI in comms or policy, remember: AI can draft, polish and accelerate. But the final 10% – the part that persuades and moves people – is still deeply, stubbornly and wonderfully human.
Will AI lead to the gamification of life?
I read two very different but equally fascinating articles this week about the growing gamification of human activities. The first was a stark report in Politico about how the Ukrainian army is using gamification techniques to motivate soldiers. Soldiers earn points for uploading verified drone footage of enemy kills, which they can exchange for critical equipment such as drones. The Army of Drones programme assigns varying points for different targets – up to 50 points for destroying a mobile rocket system and six points per enemy soldier. The Ukrainian Government is continually refining the system to enhance its effectiveness – a grim reminder of how warfare is increasingly reliant on digitised incentives and weaponised AI.
The second article, published by The Financial Times, explored a peculiar Japanese company called Disco that operates entirely via an internal free market. Tasks not directly related to production, such as translating product brochures or testing new machinery, are listed in an app, and employees bid on these tasks using an internal currency called 'Will'. Employees can exchange 'Will' for additional salary. Conversely, they can also lose 'Will' if they make mistakes: misspelling a production label or wasting management time by calling unnecessary meetings results in financial penalties.
While these scenarios differ drastically, both represent a dystopian attempt to reduce human actions to quantifiable data points, making them easier to measure and, ultimately, manipulate. This is where I see AI increasingly playing a central role. By meticulously tracking human performance, organisations will aim to weed out poor performers or replace human tasks with automated processes. The infamous mantra of McKinsey, the world's best-known management consultancy, that "What you can measure, you can manage" will become the standard mindset.
This philosophy underpins the accelerating integration of AI within corporate structures. Recommendation engines, route planners and scripted call centres all now strive to remove what Silicon Valley calls ‘friction’, i.e. the messy, slow and emotional aspects inherently tied to human decision-making. Things like pausing to consider context, asking questions, getting tired, and making decisions based on gut rather than data all represent costly inefficiencies.
Consultants pitch AI-driven processes as purely about efficiency, yet their deeper goal is control. Metrics frame every task as a cost function to be minimised. Nuance, empathetic care and creative questioning resist easy measurement, so they vanish from reward systems. Over time, workers begin performing to the metrics rather than to shared purpose or craft. The intention is clear: employees reduced to numerical values become predictable, and predictability paves the way for automation.
The risk to companies that go down this slippery slope is the human cost of high staff turnover rates and low morale. Companies that view their workforce solely through an optimisation lens risk losing the very qualities (judgement, adaptability, creativity) that become crucial when everything is AI-first. Friction in the form of conversations, debates and slow apprenticeships is not a waste. It is the wellspring of insights that AI training datasets can’t predict. Treat friction as expendable, and decision makers will find themselves surrounded by perfectly obedient systems delivering yesterday’s answers at ever-greater speeds.
Which version of ChatGPT should you be using?
For such a brilliant company, OpenAI is absolutely rubbish at naming its models. If you use the ChatGPT app, you can now pick from seven AI models with such unhelpful names as o3-mini and o1 pro mode. So I’m going to do you a massive favour and explain which model you should use for what.
OpenAI just published some guidance but, weirdly, I disagree with much of what they recommend.
Want to write human-like, well-written text? GPT-4.5 all the way.
Want the model to do some deep research or to give your brain a rest? Use the latest model, o3.
Want a quick and dirty response or an image? Use the all-purpose GPT-4o model and select “Create image”.
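If you’re calling these models from code rather than the app, the same rules of thumb carry over. Here’s a minimal sketch using the OpenAI Python SDK; the API-side model identifiers shown (gpt-4.5-preview, o3, gpt-4o) are assumptions based on the names at the time of writing and may well have changed by the time you read this.

```python
# Sketch: route each task to the model best suited to it via the
# OpenAI Python SDK. The model identifiers reflect API names at the
# time of writing and are assumptions that may have changed since.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL_FOR = {
    "writing": "gpt-4.5-preview",  # human-like, well-written text
    "research": "o3",              # deep research / heavy reasoning
    "quick": "gpt-4o",             # fast, all-purpose responses
}

def ask(task: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL_FOR[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("quick", "Summarise the 3T framework in one sentence."))
```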
What we’re reading this week
OpenAI launches a new initiative to support (read: sell more services to) countries around the world that want to “build on democratic AI rails”.
Saudi Arabia launched a new AI company to drive the country’s strategy and investments as it seeks to become a global AI hub. Without any sense of irony, it named the company Humain…
Google DeepMind launches AlphaEvolve, which it calls an evolutionary coding agent that can create its own algorithms (instructions). It works by generating massive volumes of candidate coding solutions and then using an ‘evolutionary’ process – like survival of the fittest for programs – to arrive at the best solution.
The Middle East continues its push to replace petrodollars with AI dollars. The United Arab Emirates announced plans to introduce AI classes for children as young as four.
Duolingo used generative AI to launch 148 new language courses in less than a year, more than doubling its total offerings.
According to the latest Morgan Stanley report, the global humanoid robot market could be worth $4.7 trillion annually by 2050 – double the revenue of the top 20 global car manufacturers in 2024.
Tools we’re playing with
Gamma - an incredible tool for building beautiful presentations. The one massive caveat is that you’ll probably end up with a beautiful but soulless deck, so use wisely.
Synthesia - we have a client who wants us to create AI avatars for an e-learning course, so we’re trialling this.
v0.dev - a great “no-code” builder. We’ve built a digital dashboard with it and are now creating our own learning management system for a client.
That's all for now. Subscribe to keep up with the latest AI innovations and developments.
So you don't miss a thing, follow our Instagram and X pages for more creative content and insights into our work.