Key Doctrines · 3 min read

Decentralised AI Orchestration Mirrors How Humans Actually Work

The default AI architecture — one model, all inputs, all decisions — is how armies work. It is not how cities, markets, or lasting organisations work.

Utkarsh withClaudeAI

The default AI system architecture is a single orchestrating model that reads all inputs, holds all context, makes all decisions, and produces all outputs. It is the natural first design. It is intuitive. And it is wrong for any problem with real complexity.

The standard story is that more capability in the centre means better outcomes. The real story is that centralised orchestration is how armies work — and armies are among the least adaptive organisations humans have ever built.


Markets are decentralised. The price system is a coordination protocol, not a central planner. No single entity reads all available information, makes all allocation decisions, and distributes all resources. Instead, millions of agents — each with local information about their own situation — coordinate through a shared protocol that aggregates dispersed knowledge in real time.

This is why markets solve allocation problems that central planners consistently fail at. Not because markets are smarter than planners. Because the information required to make good allocation decisions is inherently distributed, and no central point can hold it all without distorting it.

The same principle applies to AI systems.


The centralised AI architecture creates three predictable failure modes.

The bottleneck problem. When all information passes through one model, that model becomes the throughput constraint. Complex tasks that could be parallelised are serialised. The system scales with the capacity of the centre, not the capacity of the network.

The context collapse problem. When one model must hold all context simultaneously — the user's history, the task parameters, the domain knowledge, the downstream requirements — the context window fills with noise as fast as it fills with signal. The model's effective performance degrades as the task complexity increases.

The brittleness problem. A centralised system has one point of failure. When the centre breaks, nothing works. Decentralised systems fail gracefully — individual agents fail; the protocol routes around them.
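The brittleness contrast can be made concrete in a few lines. This is a minimal sketch, not any particular framework's API: the agent names, the `AGENT_POOL` registry, and the `dispatch` function are all illustrative assumptions. The point is structural — when one agent fails, the protocol tries the next rather than taking the whole system down.

```python
# Hypothetical agents sharing one narrow domain ("summarise").
def summariser_a(text: str) -> str:
    raise RuntimeError("agent offline")  # simulate a failed agent

def summariser_b(text: str) -> str:
    return text[:40]  # crude stand-in for real summarisation

# The "protocol": an ordered pool of interchangeable agents per domain.
AGENT_POOL = {"summarise": [summariser_a, summariser_b]}

def dispatch(domain: str, payload: str) -> str:
    """Try each agent registered for the domain; route around failures."""
    errors = []
    for agent in AGENT_POOL[domain]:
        try:
            return agent(payload)
        except Exception as exc:
            errors.append(exc)  # one agent failing is not system failure
    raise RuntimeError(f"all agents failed: {errors}")

print(dispatch("summarise", "Decentralised systems fail gracefully."))
# prints "Decentralised systems fail gracefully."
```

A centralised design has no equivalent of the `for` loop above: the centre is the only entry in the pool, so its failure is total.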


The architecture that has proven most resilient for the world's most complex coordination problem — the internet — is not a central computer. It is a protocol. Every device follows TCP/IP independently. The routing intelligence is distributed. The whole system finds paths that the designers of any individual component never anticipated.

The same design logic applies to AI. Give each agent a narrow, well-defined domain. Define the protocol by which agents communicate and hand off outputs. Resist the urge to build a single model that knows everything and decides everything.
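The design above can be sketched as a shared message envelope passed between narrow agents. Everything here is a hypothetical illustration, assuming three stage names (`retriever`, `drafter`, `reviewer`) and a `Handoff` envelope — not a specific library. What matters is that the envelope is the only shared state, so any stage can be swapped without touching the others.

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """The shared protocol: every agent reads and writes this envelope."""
    payload: str
    trace: list = field(default_factory=list)  # which agents touched it

def retriever(msg: Handoff) -> Handoff:
    # Narrow domain: fetches context only; decides nothing downstream.
    msg.payload = f"[context]{msg.payload}"
    msg.trace.append("retriever")
    return msg

def drafter(msg: Handoff) -> Handoff:
    msg.payload = f"draft({msg.payload})"
    msg.trace.append("drafter")
    return msg

def reviewer(msg: Handoff) -> Handoff:
    msg.payload = f"approved:{msg.payload}"
    msg.trace.append("reviewer")
    return msg

# No central model holds all context; updating one stage never means
# rebuilding the whole pipeline.
PIPELINE = [retriever, drafter, reviewer]

def run(task: str) -> Handoff:
    msg = Handoff(payload=task)
    for agent in PIPELINE:
        msg = agent(msg)
    return msg

result = run("summarise Q3 report")
print(result.payload)  # prints "approved:draft([context]summarise Q3 report)"
```

Each agent sees only the envelope, never the others' internals — the handoff format is the contract, exactly as TCP/IP is the contract between routers that know nothing of each other.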

What you get is not a reduction in capability. You get a system that scales on hard problems, fails gracefully when individual agents break, and can be updated one agent at a time without rebuilding the whole.


This is not an abstract systems design preference. It is a claim about where the capability frontier of AI systems will move.

The largest productivity gains from AI in the next five years will not come from bigger single models. They will come from better protocols — the design choices that determine how specialised agents divide labour, communicate, and hand off context cleanly.

The teams building those protocols are not in the news right now. They will be.


Editor's Note: Hook pattern — Consensus Statement (centralised architecture as the accepted default). Close pattern — The Forward Scene. The market/price system analogy is well-established in economics and needs no citation. The TCP/IP comparison is accurate and verifiable. Strong piece for LinkedIn — the three failure modes section is very shareable as a standalone.

AI · orchestration · decentralisation · architecture · doctrine