April 2026

The Undesigned Middle

Most organizations are deploying AI. Far fewer are building anything that earns trust.


ABSTRACT

Fewer than 6% of organizations report meaningful financial impact from their AI investments, despite near-universal adoption. The technology is not the constraint. The structure beneath it is. This piece names the gap — the Undesigned Middle — and makes the case for closing it. The argument is built from direct engagement across industries: the questions worth answering before building, and the structural layers that determine whether AI transformation earns trust or erodes it. Organizations that get this right grow revenue 41% faster and profit 49% faster than those that do not. This is not a technology problem. It is a design and leadership problem — and one every organization deploying AI today has the means to address.


A new CEO arrives at one of the world's largest logistics companies with a clear mandate: use AI to move the organization from a volume-driven shipping operation to a data-driven, intelligent network — making supply chains smarter for everyone.

On one side of the organization, teams are losing customers and scrambling to restore the basics — on-time delivery, fast resolution, the fundamental reliability that trust is built on. On the other, the AI transformation agenda is gathering momentum — new platforms, new capabilities, a new operating model.

Both efforts are real. Both are urgent. Neither has a clear line to the other.

And that space between them — the place where a CEO's transformation vision meets the customer who just needs their shipment to arrive — is largely undesigned.

Nobody has a word for that space yet. This is an attempt to name it.


THE REFRAME

Every organization is asking the wrong question.

Not how do we deploy AI — but what are we building, for whom, and how will we know when it's working.


Every day, I work with leaders navigating one of the most consequential shifts of our time. Over the last two years, I built and led a practice around a specific thesis: that human-centered AI transformation outperforms cost-driven transformation — for customers, for employees, and for the bottom line. Across every engagement, I found the same gap, and it had nothing to do with the technology. The technology was moving fast enough on its own. The gap, visible across industries and organizations of every scale, is between what AI makes possible and what organizations are actually building with it.

Most transformation efforts begin with the right question. What could we do if we had this capability? The roadmaps get built. The platforms get stood up. The pilots show promise. And then, somewhere in the middle of it all, the work changes character. It stops being about what to build and starts being about how to make everything cohere — across teams, across channels, across the moments that actually matter to the people the system is supposed to serve. This is where most transformations slow down. Not because the vision was wrong. Because the structure beneath it was never defined.

That second question is harder. It requires leaders to stop adding to the system and start defining it. To move from capability to intent. Most organizations have no framework for it — which means they have no way to see the gap, let alone close it.

| The gap between intent and impact is no longer a technology problem. It is a structural one. Call it The Undesigned Middle.

The data bears this out. McKinsey's 2025 State of AI survey found that fewer than 6% of organizations report meaningful EBIT impact from their AI investments.1 The technology is not the constraint. The structure beneath it is.


THE DIAGNOSIS

The story is not unusual. It is the pattern.

The gap between transformation agenda and customer reality — and what it actually takes to close it.

We were brought in through a narrow door: a request to redesign key transactions on an organization's website. What we found underneath was something the brief had no language for. On one side, urgency to fix what was broken: customers being lost, trust eroded, the basics not working. On the other, the AI transformation agenda, gathering momentum. Each effort had its own leadership, its own roadmap, its own definition of success. And no one had been asked to connect them.

What became obvious, quickly, was that the request itself was the wrong unit of work. The question was not how to improve the digital touchpoints. It was whether the organization understood what it was actually building — for whom, and to what end. Who is this for, really? Not the large enterprise account, but the small business owner for whom a missed delivery is not an inconvenience — it is a broken promise to their own customer.

| The dashboard looks good. Yet the customers are in pain.

So instead of designing moments, we proposed designing for change. End-to-end. Across all channels. It required the global team to coordinate in ways it hadn't before. It required someone to ask the questions the transformation agenda had skipped — and name the gap as a problem worth solving.

The mental models most leaders bring to experience and AI were formed in a different era. An era when experience meant the interface. When AI meant automation — cost out, speed up, scale fast. These are not wrong ideas. They are incomplete ones. And at the scale AI now operates, incomplete mental models do not produce modest shortfalls. They produce systems that are fast, capable, and quietly misaligned with the people they were built to serve.

Every project begins with a reframe: the moment we understand what is actually being asked, and why the answer to the wrong question will not be enough.


BEFORE THE STRUCTURE

Four questions. Answer them before you build.

The questions most organizations skip — and why skipping them is no longer an option.

I have seen what happens when these questions go unasked: the purpose question answered too late, the wrong person centered in the room, success defined by narrow cost savings alone. These are not new questions. They predate AI, predate digital transformation, predate most of the platforms being built today. Every good product, every service worth trusting, was built by someone who answered four questions before building anything. That discipline separates work that endures from work that merely ships. AI does not change these questions. It makes skipping them unforgivable.


WHY

The purpose test. It sounds simple until you sit in a room where the answer is "reduce cost" dressed up as "improve the customer experience." The systems built from each intention feel entirely different to the people who use them. Purpose is not a mission statement. It is the decision that everything else follows from. Without it, capability fills the vacuum — and at scale, that produces consequences no dashboard was designed to catch.

WHO

Whose needs are you actually designing for? This is not a demographic question. It is a power question. Whose needs are centered in the room where decisions are made — and whose are assumed, approximated, or simply absent? The answer is almost always the person who can least afford for the system to get it wrong. Naming them is an act of organizational courage — because it means choosing whose experience takes priority.

WHAT MATTERS

How will you know if it's working — before you launch? A system built without agreed measures of human and business impact is not an experiment — it is a guess with consequences. But here is the harder truth: the right measures are not just better efficiency metrics. A call center that tracks response time, handle time, and agent tone — while the person on the other end still has no resolution — is not measuring service. It is measuring activity. The discipline is not finding better proxies. It is deciding, before you build, that the person's actual outcome matters as much as the numbers that are easier to count. Most organizations can tell you their efficiency metrics. Fewer can tell you whether anyone trusts them more.

DO NO HARM

This is not a legal obligation, though it is that too. It is a professional ethic. AI systems inherit the blind spots of the teams that build them, the gaps in the data they were trained on, the assumptions baked into decisions made long before deployment. The question is not whether bias exists in the system. It is whether anyone is asking. Every leader building these systems bears this responsibility. The only question is whether they choose to own it.


These four questions are not a framework bolt-on. They are the precondition. And in most enterprise AI transformations today, they are the work that nobody is doing.


THE PATH AHEAD

The structure that turns intent into reality

Understand who you are building for. Respond across every moment. Decide where human judgment must lead.

UNDERSTAND

Most enterprise AI systems know a great deal about their users. What they rarely know is why. Not the inferred why of a recommendation engine — the human why. The context a person carries into an interaction that no data point captures.

| Do you actually know these people — or do you know their data?

In one engagement, redesigning how a global professional services firm listened to its most significant client relationships, we started with fifty-five direct conversations with C-suite leaders and the teams who worked closest with them — on both sides of the relationship. What emerged was not a set of satisfaction scores. It was a map of the moments that mattered most and were least understood — where a client needed a partner to see around corners and instead received a process, where the relationship was strong on paper and fragile in practice, where the work being done was never connected to the outcome the client actually cared about.

RESPOND

Understanding who the system is being built for is necessary. It is not sufficient. The harder problem is what the system does with that understanding — across every channel, every moment, every human state a person might arrive in.

| The system either understands you — or it doesn't. Every interaction is a signal about whether you are valued.

In another engagement, partners were making significant purchases and then, when the delivery date arrived, hearing nothing. As one partner put it: "We make the big purchase, and when the delivery date rolls around, silence." That silence is not a notification failure. It is a trust failure. The app we designed changed that: real-time visibility, proactive alerts, options when things went wrong, and a public commitment to on-time performance measured against stated goals. The hard part was not the technology. It was the organizational courage to make the commitment visible.

DECIDE

The most consequential design decision in any AI system is not what it does. It is what it does not do — and who is responsible when it gets that wrong. Every system that touches a human life reaches moments where the stakes are too high for automation to be the right answer. They are the moments that determine whether trust is built or broken.

| Not how much can be automated. How much should be. That question belongs to humans.

When we redesigned the unified portal for the State of New Mexico Human Services Department — used by more than half the state's population — the stakes were not abstract. Residents navigated four separate division offices to access benefits, recertifying repeatedly to avoid losing them. Then one woman told us she could not go to the child support office. Doing so would reveal her location to an abusive partner. The system had no way to know that. A workflow could not decide that. A person had to.


THE IMPERATIVE

The organizations that matter will be the ones that do this work right.

The work that determines whether everything else holds together.

AI is moving faster than most organizations can design for. The gap between what is being built and what people actually experience is not closing — it is widening. Every quarter that passes without naming and closing the Undesigned Middle is a quarter of customer trust eroded — quietly, at scale. And customers are not the only ones paying the cost. Every employee at the front line is absorbing friction that the system was never designed to resolve — carrying the gap between what AI promised and what it actually delivered.

The organizations that will get this right are not the ones moving fastest. They are the ones willing to stop — and ask the right questions before they build. And the ones that do will not just build better products. They will build better businesses. The evidence is consistent. Organizations that lead on customer experience grow revenue 41% faster and profit 49% faster than those that do not.2 McKinsey's 2023 research found that experience-led organizations achieve cross-sell rates 15–25% higher and share of wallet 5–10% greater than their peers.3 Cost-driven transformation produces finite gains. Experience-led transformation compounds.

The Undesigned Middle is not a gap that closes itself. It is structural. It is human. And it is, in the end, the only work that compounds — into retention, into revenue, and into a competitive position no technology investment alone can build.


1 McKinsey & Company, The State of AI 2025: Agents, Innovation, and Transformation, November 2025. Of 1,993 survey respondents across 105 nations, only 5.5% reported attributing more than 5% of their organization's EBIT to AI.

2 Forrester Research, 2024 US Customer Experience Index, June 2024. CX leaders vs. CX laggards across revenue growth, profit growth, and customer retention.

3 McKinsey & Company, “Experience-led growth: A new way to create value,” March 2023.

* Client organizations referenced in this piece have been anonymized at their request. Case details have been reviewed for accuracy.