3 March 2026 · ai agents · enterprise

Agents Are Ready. Most Organisations Aren't.

40% of AI agent projects will be scrapped by 2027. The model is almost never why.

Originally posted on Substack

There is a moment in almost every agent project where everything looks like it’s working.

The demo lands. Stakeholders are impressed. Someone with a budget signs off. A pilot begins.

And then the real environment shows up.

Not the clean data. The actual data: messy, inconsistently formatted, split across three systems that were never designed to talk to each other. Not the simple workflow. The one with seventeen exceptions that only the most experienced person on the team knows about. Not the API that worked in the test environment. The production one, behind a firewall, requiring a security clearance nobody thought to request before the pilot started.

I have been in this moment more times than I want to count.

And Gartner has now put a number on it: 40% of AI agent projects will be scrapped by 2027. Only 11% of companies have agents fully operational in production right now.

The model is almost never the reason.

The Pattern

Here is what I keep seeing.

A team gets excited about an agent. They pick a real use case, something painful, something that would save hours every week if it actually worked. They choose a platform. They build a demo. The demo impresses people.

Then the pilot starts. And the first thing that happens is someone in IT asks how the agent will access the CRM.

That question surfaces the fact that the CRM has access controls the agent can’t navigate without a formal permission request. That request goes to security. Security asks what data the agent can see. Nobody has a complete answer. The pilot pauses.

That pause is where most projects quietly die.

There’s no dramatic moment of failure. No single decision that kills it. The agent just sits there, technically working, while the organisational machinery catches up to the fact that someone built a system with dependencies nobody mapped.

Demo vs Production

The pattern is not about platform choice. It’s not about model capability. It’s not even about the use case.

It’s about the gap between what gets scoped in the demo and what actually exists in production.

In a demo, data is clean. Permissions are assumed. Exceptions don’t appear because you designed the scenario to avoid them. The integration points that would cause problems in a real environment are either mocked or ignored.

In production, none of that is true.

The agent needs access to real systems. Those systems have owners, compliance requirements, and audit trail expectations that weren’t part of the build conversation. The workflow has edge cases the original scoping never surfaced because nobody mapped the full process before deciding what to automate.

I’ve built agents on almost every major no-code platform now. Some worked. Some didn’t. And the difference was almost never the platform.

It was what I did before I opened it.

Ask Before You Build

Before I build anything now, I spend more time asking than building. What are the actual data sources this agent needs, and who owns each one? What does the output need to look like, and where does it need to land? What does a failure look like, and who is accountable when it happens?

Those questions feel slow. In a space that moves this fast, slowing down feels like falling behind. But they are the difference between a demo and something that runs in production six months from now.

Tested as Models, Not as Systems

The uncomfortable truth about the 40% that will be scrapped is this: most of them were never really tested.

They were tested as models. Not as systems.

A demo tests whether the AI can perform the task in ideal conditions. That’s worth knowing. But it doesn’t test whether the infrastructure required to deploy that agent at scale actually exists in your organisation.

In most organisations, it doesn’t. Not yet. Not because the organisation is behind or incapable. Because the infrastructure conversation got skipped in favour of the capability conversation. Because the demo was so good that it felt like the hard work was done.

It wasn’t.

Slow Before Fast

The agents that make it to production are not the most impressive ones in the demo. They’re the ones where someone did the unglamorous work upfront: mapped every data dependency, understood every permission requirement, walked the exception-heavy parts of the workflow before writing a single prompt.

Slow before fast. Every time.

The agent can be ready on day one. The organisation takes longer.

That’s not a failure of AI. That’s just the reality of building inside systems that were never designed with AI in mind. The question is whether you account for it before the pilot starts, or discover it after it stalls.

Most teams are still discovering it after.