April 2026

What We Mean When We Say Human

Every debate about AI and humans is actually three debates at once. Until you separate them, you can't answer any of them.

A cognitive psychologist I know had an argument with her son. He's a VC. He'd built an AI agent to do research on companies he was considering funding — the kind of deep, time-consuming work that used to require a junior analyst with domain knowledge. His mother told him he'd still need a human expert in the loop. He pushed back: domain experts bring their own bias. Their pattern recognition is shaped by what they've seen, what they've read, who they've worked with. Why assume that's better than the model?

It's the kind of question that doesn't resolve cleanly — because it shouldn't.

What sounds like one question is actually three. And when you try to answer all three as one, you end up with a statement that any thoughtful person can immediately contradict — because the opposite is also true. The three questions are judgment, expertise, and accountability. They are not the same thing. Treating them as one is how this conversation always ends in stalemate.

ONE

Judgment

Judgment is not about incomplete information. It is about genuinely competing values.

A patient asks: should I take the experimental treatment? The data can tell you the survival odds. It cannot tell you what this person is carrying into that question — what they've already been through, what they're willing to go through again, what matters more to them than more time. That is a question the data was never designed to answer.

| Judgment without accountability is not judgment. It's computation.

TWO

Expertise

This is where the son's point lands hardest. And where he is most right.

A radiologist brings years of training to reading a scan — the accumulated pattern recognition of thousands of cases, the instinct that something is slightly off before they can name why, the judgment that knows when the textbook answer is wrong for this patient. It is, in many situations, the difference between a missed diagnosis and a caught one. An oncologist who knows not just that a good radiologist matters, but which specific radiologist to trust with a particular kind of scan, is carrying knowledge no model has been trained on, yet. That knowledge lives in relationships, in reputation, in years of watching who catches what others miss.

That expertise is built on everything they have seen — the hospitals they trained in, the populations those hospitals served, the cases that made it into the literature and the ones that didn't. Experience is not neutral. Neither is the model. It carries the limits of whoever contributed the data, whichever populations made it into the record.

One bias is carried by a person. The other is baked into a system. They are not the same bias, and in any given situation, one may be more costly than the other. In some domains AI already outperforms human experts. In others the human brings something the model cannot replicate, yet. Figure out which is which before you build, not after the system has already decided for you.

| What you have seen shapes what you see. That is true of the radiologist. It is true of the model.

THREE

Accountability

This is where the argument settles.

Who is responsible when the system gets it wrong? This is where the human is non-negotiable. Not because humans are better decision-makers. Because accountability requires a person. A system cannot be held responsible. A person must be.

| A system cannot be sorry. A system cannot make that right.

When something goes wrong — and it will — someone has to be able to look the person in the eye. That is not a process requirement. It is a human one.

On expertise, the ground is shifting — models are improving, human advantages are narrowing, and the honest answer in many domains is "not yet." On judgment, the question was never capability — it was whether a machine can hold competing values on behalf of a specific person in a specific moment, and that remains genuinely open. On accountability, nothing shifts. There are moments when the only thing left is to be present for what went wrong. A system cannot do that.

The next time you are in the room where something is being built or decided — ask which question you are actually answering. Judgment. Expertise. Accountability. They are not the same. In most rooms, the first two get debated at length. The third gets assumed. That is where things go wrong.

Anuradha Sachdev built and led the North America customer service experience practice at Accenture Song — one of the first teams to design and deploy agentic AI in customer service at enterprise scale. She writes about what that work revealed.