A pragmatic look at what AI can and can't do in 2025
Every morning, I fire up ChatGPT to help draft emails, ask Claude to analyse complex documents, and use Gemini to brainstorm ideas. Like millions of professionals worldwide, I've made AI as routine as checking my diary or grabbing coffee. Yet despite this daily reliance, I remain what I'd call an "AI sceptic"—not because these tools aren't useful, but because the gap between reality and rhetoric continues to widen.
The Uncomfortable Truth About AI's Sweet Spot
Yesterday, I encountered a description that crystallised my thinking about AI's current state: "AI is fantastic at solving problems that we've already solved." This insight cuts through the noise of both breathless evangelism and knee-jerk dismissal to reveal something more nuanced and, ultimately, more useful.
Think about what AI excels at in your daily workflow. Writing emails that follow established patterns. Summarising documents using familiar structures. Generating code based on well-documented practices. Translating languages using decades of linguistic research. Creating images that remix existing artistic styles. Each of these tasks involves recombining known solutions in novel ways—impressive, certainly, but fundamentally different from genuine problem-solving breakthroughs.
This distinction matters because it helps explain both AI's remarkable utility and its surprising limitations. When we ask AI to help with tasks that have established frameworks and abundant training data, the results can feel magical. When we venture into territory requiring genuine reasoning about novel problems, we quickly hit walls.
The Tower of Hanoi: A Humbling Reality Check
The limitations become stark when we examine AI's performance on problems that require step-by-step logical reasoning. The Tower of Hanoi puzzle—a classic brain teaser involving moving disks between pegs according to simple rules—has become an unexpected litmus test for AI capabilities.
The problem seems deceptively simple: move a stack of disks from one peg to another, one disk at a time, never placing a larger disk on a smaller one. A reasonably bright 10-year-old can work through the logic, even if slowly. Yet current AI systems struggle once the puzzle grows beyond seven or eight disks, despite having access to vast computational resources and training data that includes countless examples of the puzzle.
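For concreteness, the puzzle's complete solution is a short recursion—which is exactly why it makes a good reasoning benchmark. Here is a minimal Python sketch (the function name and move representation are my own):

```python
def hanoi(n, source, target, spare):
    """Return the sequence of moves that transfers n disks from source to target."""
    if n == 0:
        return []
    # Move the top n-1 disks out of the way, shift the largest disk,
    # then move the n-1 disks back on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")
print(len(moves))   # the optimal solution for n disks takes 2**n - 1 moves: here, 7
```

The move count doubles with every added disk, so solving the puzzle demands holding an exponentially growing plan in mind—trivial for a procedure, but a genuine stress test for systems that reason step by step in text.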
This isn't merely an academic curiosity. The Tower of Hanoi represents a class of problems that require what researchers call "deep reasoning"—the ability to think several steps ahead, maintain complex mental models, and work backward from desired outcomes. These cognitive abilities are fundamental to tackling genuinely novel challenges, from scientific breakthroughs to complex strategic planning.
The fact that this limitation has persisted for over three decades of AI research suggests we're bumping against something more fundamental than a temporary technical hurdle. It hints at the difference between pattern matching—however sophisticated—and genuine understanding.
The Productivity Revolution We're Already Living
None of this diminishes AI's transformative impact on how we work. The productivity gains from current AI tools are real and substantial. I can research topics faster, write more clearly, and tackle analytical tasks that would have taken hours just a few years ago. Developers ship code faster, designers iterate more rapidly, and analysts process data at unprecedented scale.
These improvements compound across organisations and industries. When millions of knowledge workers become 20-30% more efficient at routine cognitive tasks, the aggregate effect reshapes entire sectors. We're witnessing the automation of what economist David Autor calls "middle-skill" cognitive work—tasks that are routine enough to systematise but complex enough to require sophisticated tools.
This productivity revolution deserves recognition without hyperbole. AI is proving to be less like electricity (transforming everything it touches) and more like the spreadsheet—an incredibly powerful tool that changed how we work without fundamentally altering human intelligence or decision-making.
Why the Hype Cycle Matters
The disconnect between AI's actual capabilities and public perception isn't just academic—it has real consequences. Overinflated expectations lead to misallocated resources, unrealistic project timelines, and eventual backlash when reality falls short of promises.
We've seen this pattern before with previous technology waves. The dot-com boom promised to revolutionise commerce and communication, which it ultimately did—but not in the timeframe or manner initially imagined. The intervening crash and recovery taught us valuable lessons about distinguishing between transformative potential and immediate reality.
Today's AI hype risks creating similar distortions. Companies rush to add "AI-powered" features without clear use cases. Investors fund startups based on impressive demos rather than sustainable business models. Workers fear displacement by systems that, while powerful, remain far from true human-level reasoning.
Finding the Middle Path
The challenge isn't choosing between AI evangelism and scepticism—it's developing a more nuanced understanding of what these tools can and cannot do. This middle path requires acknowledging both AI's remarkable achievements and its fundamental limitations.
AI excels at augmenting human capabilities within established domains. It struggles with genuine innovation, contextual understanding, and problems requiring extended chains of reasoning. It can help us work faster and more efficiently, but it cannot yet replace human judgment in complex, ambiguous situations.
This perspective suggests a more measured approach to AI adoption. Rather than asking whether AI will solve all our problems or prove to be another overhyped fad, we might ask: How can we best leverage AI's strengths while remaining clear-eyed about its weaknesses?
The Path Forward
As we navigate this AI moment, several principles can guide our thinking:
Embrace pragmatic adoption. Use AI where it demonstrably improves outcomes, but don't force it into every workflow just because it's trendy.
Maintain human oversight. AI works best as a powerful assistant, not an autonomous decision-maker, particularly for consequential choices.
Invest in AI literacy. Understanding what these systems can and cannot do helps us use them more effectively and avoid common pitfalls.
Stay curious about limitations. The Tower of Hanoi problem and similar challenges remind us that impressive performance in one domain doesn't guarantee competence in another.
The AI revolution is real, but it's not the revolution many predicted. Instead of artificial general intelligence that matches or exceeds human reasoning, we have sophisticated pattern-matching systems that excel at specific tasks while struggling with others that seem elementary to human minds.
This reality is neither disappointing nor diminishing—it's simply different from the science fiction narratives that shape our expectations. By embracing this more nuanced view, we can harness AI's genuine strengths while avoiding the pitfalls of both uncritical enthusiasm and reflexive dismissal.
The future belongs not to humans or AI alone, but to those who understand how to combine both most effectively.