AI and productivity: What a Difference a Year Makes! 

In 2025, I wrote a blog post for National Productivity Week in which I posited that AI was “no longer in the future” and would transform how businesses work. Twelve months on, we thought it was worth going back to that piece and reflecting on what happened.

Some of it has played out as expected, some things have moved more quickly, and some things haven’t changed much at all. But that’s the thing about predictions: some will be right, some wrong. They’re not guarantees; they’re something to plan against, signposts of where the world might go and how we might react.

What held up: chatbots as assistants 

The central argument still stands: AI really has become part of day-to-day work rather than something to demo at conferences. Using Microsoft 365 Copilot or similar tools for writing assistance is now commonplace. Automation of repetitive tasks, enquiry handling, data tidying, and first-draft generation is baseline, not innovation. And the basic case I made for AI in decision-making, personalisation, and research still holds.

What moved faster than I expected: the agentic enterprise

Last year, I said that we could help organisations “on the path to agentic AI”. Well, that path turned into a motorway! In the space of twelve months, agentic AI has shifted from keynote slide to shipping products: Microsoft Copilot Studio agents, Salesforce Agentforce 360, ServiceNow AI Agents, and many more. The change in mindset that comes with it is the real story: we’re moving from prompting AI (ask it to do a thing, read the answer) to delegating to AI (give it an outcome and a set of tools, let it work).

That’s a very different relationship. It turns AI from a clever typist into something closer to a very junior team member, which changes what “productivity” even means. It also changes what needs to be in place underneath: identity, access, observability, and guardrails, which is where most organisations are now discovering the real work sits. My friend Chris Weston compared agentic AI to having just been given 100 Homer Simpsons, and it really is a good analogy.

Inside Node4, we’ve been investing heavily in agents of our own. Without giving too much of the secret sauce away, we have everything from Enablement agents that help our sellers find the right information at the right moment, through to using AI to modernise the way we deliver services to customers. The common thread is that these aren’t science projects; they’re making real changes to how we work.

What moved more slowly than I expected: proving value, adoption, and data readiness 

If agentic AI was the pleasant surprise, the harder truth is that several things I thought would be solved by now are still stubborn problems.

  • Measurable ROI is still difficult. A year ago, I quoted IDC’s forecast of $1tn in productivity gains by 2026. The gains may be real, but quantifying them remains genuinely hard. Most of the value shows up as small, reclaimed fragments of time across thousands of interactions, not as neat line items on a P&L.
  • Enterprise-wide rollout is slower than the pilots suggested. Everyone’s Copilot pilot went brilliantly. But scaling it across the workforce has surfaced the unglamorous realities: permissions hygiene, document sprawl, change management, training, and the simple fact that not everyone who has a licence uses it.
  • Data readiness is still a blocker. I flagged this last year and, if anything, I under-sold it. The organisations getting disproportionate value from AI are the ones that invested in data governance, classification, and quality before they plugged in the model. That’s an unfashionable message, and it’s why I’ll keep repeating it.

Where to look next?

Two developments are particularly relevant right now:

Reasoning models: Ever since generative AI took hold, the critique has been “it sounds confident, but it can’t really think”. That’s true, and I’d argue it always will be to some degree, but we now see models that have meaningfully narrowed the gap. They take longer, show their working and, for complex analysis, genuinely perform better. That matters for decision support, for anything legal or financial, and for any task where being confidently wrong is expensive.

Shadow AI: This one isn’t new. Samsung’s ChatGPT leak back in 2023 was arguably the first big wake-up call, but the problem has sharpened since. The governance conversation has shifted from “how do we deploy AI responsibly?” to “how do we stop our people pasting confidential data into tools we didn’t buy?” Unsanctioned personal use of ChatGPT, Claude, Gemini and others remains one of the fastest-moving data-exposure risks in most organisations. The answer isn’t to ban it; that just pushes it further underground. It’s to provide sanctioned tools that are genuinely good enough, combined with clear policy and sensible monitoring. And beyond the security headlines, there’s a real productivity tax: people reinventing workflows in unsanctioned tools, and teams spending hours reconciling work that never made it back into the system of record.

The Frontier Firm conversation 

Anyone in the Microsoft space will have found it hard to avoid talk of “Frontier Firms” over the last few months. It’s become the framing for where all of this is heading: organisations that aren’t just using AI but are rewiring how they operate around it. I wrote about what the term actually means on my personal blog, and my colleague Jay Fitzhenry picked up the thread here at Node4, saying it’s the conversation most businesses aren’t having and really should be.

The productivity implication is straightforward: Frontier Firms aren’t more productive because they’ve bought more AI, they’re more productive because they’ve redesigned work around it. The gains come from the redesign, not the tooling.

Let’s talk about Copilot

Our own experiences with Copilot have been instructive. The technology is the easy bit. Helping people change their habits and trust AI with work they used to own is where the real productivity gains are earned or lost.

One recent development worth a specific mention: Copilot Cowork. It’s still a “Frontier” agent, i.e. it’s very early in its lifecycle, but since I got access, it has noticeably changed the kinds of tasks I’ll delegate to Copilot and given me fewer reasons to reach for other tools. Maybe that’s because it runs on Anthropic’s Claude Opus inside Microsoft 365, maybe it’s just that the tools are getting better, but it is genuinely impressive, and a useful preview of where mainstream productivity AI is heading.

Looking at the next twelve months

I’ll keep the predictions modest this time. Three things I think will matter:

  • Agents move from novelty to production. The question stops being “can we build one?” and starts being “how do we run a dozen of them safely?” Some software companies are even positioning agents as digital employees.
  • AI fluency becomes a baseline skill. Not prompt-engineering tricks, but genuine fluency in when to use AI, when not to, and how to check its work.
  • Measurement matures from gut feel to data. The tools and methods for quantifying AI-driven productivity gains are finally catching up; expect the conversation to shift from “does it work?” to “how much, for whom, and where next?”

Ready to take the next step?

A year ago, “getting started with AI” meant rolling out Copilot (or another AI tool of choice) and seeing what people did with it. Today it means something more ambitious: putting your first AI agents into production against a real business problem, with the data, governance, and guardrails to do it responsibly. Think big. Start small. Scale fast.

If that’s the conversation your organisation is ready to have, we’d like to help.