
Centralized vs. Decentralized AI Teams: Which Works Better?

Centralized AI teams own all agents from one place. Decentralized teams let each function own its own. Most companies start centralized and federate over 12 to 18 months. Here's how to decide which model fits where.

TL;DR

Centralized AI teams (one platform team owns all agents) optimize for consistency, security, and infrastructure. Decentralized AI teams (each function owns its own agents) optimize for speed and domain fit. Most companies start centralized to build the foundation and federate to domain-owned agents over 12 to 18 months. The right answer is rarely pure either way: it's a split where infrastructure stays central and product-side agent design stays distributed.

The question of whether to centralize or decentralize an AI team sounds like a structural debate, but underneath it's a question about where the bottleneck is. Centralized AI teams (one platform team that owns all the agents in the company) win when the bottleneck is consistency, security, and shared infrastructure. Decentralized AI teams (each function builds and owns its own agents) win when the bottleneck is domain expertise and speed of iteration. Most companies have both bottlenecks at once, which is why almost every successful AI-augmented org ends up with a hybrid that splits the work along clear lines.

The pattern that keeps working is this: infrastructure stays central, product-side agent design stays distributed, and the handoff between the two is a clearly defined platform that domain teams build on top of. Companies that try to centralize everything end up with a platform team bottleneck. Companies that try to decentralize everything end up with twelve teams rebuilding the same broken security model. The right answer is to be deliberate about which work belongs in which layer.

What centralized AI teams actually do

A centralized AI team is a single function (usually called Platform, AI Engineering, or AI Operations) that owns the entire agent stack for the company. They build the agents, run the infrastructure, monitor performance, handle incidents, and shepherd new agent requests from other teams.

This model has real advantages, especially in the early days of an AI rollout.

Consistency is the obvious one. Every agent uses the same observability stack, the same identity model, the same evaluation framework, the same deployment pipeline. When you only have one team building agents, the agents look like they were built by one team.

Security is the less obvious but more important one. AI agents access company data, customer data, sometimes financial systems. A single team that owns all the agents can build proper guardrails (data access controls, audit logs, kill switches, rate limits, secrets management) once and apply them everywhere. When every function builds its own agents in isolation, every function ends up reinventing security, badly.
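Building guardrails once and applying them everywhere can be as simple as a shared wrapper that every agent call passes through. The sketch below is illustrative only (the class and agent names are hypothetical, not from any real platform): it combines a data-access allowlist, an audit log, a sliding-window rate limit, and a kill switch in one place.

```python
import time
from collections import deque

class Guardrails:
    """Shared controls a central team applies to every agent:
    data-access allowlist, audit log, rate limit, kill switch.
    All names here are illustrative, not a real product API."""

    def __init__(self, allowed_sources, max_calls_per_minute):
        self.allowed_sources = set(allowed_sources)
        self.max_calls = max_calls_per_minute
        self.audit_log = []
        self.killed = False
        self._calls = deque()  # timestamps for the rate window

    def check(self, agent, source):
        now = time.monotonic()
        # Kill switch: one flag disables every agent at once.
        if self.killed:
            raise PermissionError(f"{agent} is disabled by kill switch")
        # Data access control: agents only read approved sources.
        if source not in self.allowed_sources:
            self.audit_log.append((agent, source, "denied"))
            raise PermissionError(f"{agent} may not read {source}")
        # Rate limit: sliding one-minute window.
        while self._calls and now - self._calls[0] > 60:
            self._calls.popleft()
        if len(self._calls) >= self.max_calls:
            raise RuntimeError(f"{agent} exceeded rate limit")
        self._calls.append(now)
        self.audit_log.append((agent, source, "allowed"))

guards = Guardrails(allowed_sources={"crm"}, max_calls_per_minute=100)
guards.check("pipeline-agent", "crm")          # allowed, logged
try:
    guards.check("pipeline-agent", "payroll")  # denied, logged
except PermissionError:
    pass
```

The point isn't the specific checks; it's that they live in one class that twelve teams inherit instead of twelve classes that eleven teams get wrong.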

Infrastructure is the structural one. The platform itself (the message bus, the agent runtime, the LLM gateway, the eval suite, the cost tracking, the prompt versioning) takes serious engineering to build well. A centralized team can amortize that cost across every agent in the company. Distributed teams cannot.
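The LLM gateway is a good example of that amortization. Here is a minimal sketch, assuming a hypothetical gateway class and made-up model prices; a real gateway would call an actual provider API, but the shape is the same: every team's agents route through one chokepoint, so cost tracking and model choice live in one place.

```python
from collections import defaultdict

class LLMGateway:
    """One gateway in front of the model provider. Every team's
    agents call through it, so cost tracking and model routing
    are centralized. Names and prices are illustrative."""

    PRICE_PER_1K_TOKENS = {"small-model": 0.001, "large-model": 0.01}

    def __init__(self):
        self.cost_by_team = defaultdict(float)

    def complete(self, team, model, prompt):
        # Stand-in for a real model call; here we just count tokens
        # naively (whitespace split) and attribute cost to the team.
        tokens = len(prompt.split())
        self.cost_by_team[team] += tokens / 1000 * self.PRICE_PER_1K_TOKENS[model]
        return f"[{model}] response to: {prompt[:40]}"

gateway = LLMGateway()
gateway.complete("sales", "large-model", "summarize this pipeline " * 50)
gateway.complete("marketing", "small-model", "draft brand copy")

# Per-team spend is visible centrally, instead of scattered
# across twelve sets of API keys.
report = dict(gateway.cost_by_team)
```

Compare this with the decentralized failure mode described later, where each team holds its own keys and nobody can answer "what did we spend on agents last month?"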

The downside of centralization is that the platform team becomes the bottleneck. Every new agent request goes through their queue. They prioritize based on their visibility, which is incomplete. The marketing team wants an agent that knows brand voice, and the platform team builds something that technically works but misses the nuance. Domain expertise gets lost in translation. Six months in, the platform team has built thirty agents and the operating teams complain that none of them quite work.

What decentralized AI teams look like

The decentralized model goes the other way. Each function builds and owns its own agents. Sales builds the pipeline agent. Marketing builds the brand voice agent. Finance builds the financial close agent. Each function has someone (often a senior IC or an embedded engineer) responsible for their function's AI work.

The advantages are real here too.

Domain fit is the obvious one. The team that builds the pipeline agent knows what the pipeline looks like, what edge cases matter, what the sales reps actually need from it. They can iterate fast because they're the customer too. There is no translation layer between "what the function needs" and "what gets built."

Speed is the structural one. A decentralized team doesn't have to wait in a platform team's queue. They build, ship, learn, and rebuild on their own timeline. Agent improvement cycles are usually 5x to 10x faster.

Ownership is the cultural one. When a team builds and owns its own agent, they understand it, trust it, and use it. When the platform team hands an agent to a function, the function often treats it as a black box, doesn't trust it fully, and uses it less than they could.

The downside is everything the central model solved. Inconsistency. Duplicated infrastructure. Twelve different security models, eleven of which are broken. No shared observability. Cost tracking is impossible because every team is making its own LLM API calls with its own keys. Compliance audits become a nightmare.

A purely decentralized model only works in companies that already have strong distributed engineering culture and the depth to staff every function with serious AI engineers. That is roughly zero companies under 500 people.

The hybrid pattern that actually works

After watching dozens of companies attempt both ends and converge, the pattern that holds up is a deliberate split.

Central platform team owns:

  • The agent runtime (where agents execute, how they're deployed, how they fail safely).
  • Identity and access control (who and what can authenticate, what data each agent can read).
  • Observability (logs, metrics, traces, eval results, cost tracking).
  • The shared message bus or coordination protocol.
  • Security review and incident response.
  • Compliance and audit infrastructure.

Domain teams own:

  • Agent design and scope (what the agent does, what KPIs it owns, what seat it fills on the accountability chart).
  • Prompts, system messages, behavior tuning.
  • Domain-specific evals (does the agent actually produce useful pipeline insights, marketing copy, or financial summaries?).
  • Day-to-day operation and incident triage for their own agents.
  • Decisions about when to retire or replace an agent.

The handoff is the platform itself. Domain teams build agents on top of the platform. The platform team provides infrastructure, tools, security, and observability. Neither team is doing the other's job.

This split works because it puts each kind of decision in the hands of the team best positioned to make it. The platform team can't write a great prompt for the sales pipeline agent because they don't understand the pipeline. The sales team can't build proper audit logging because that's not their core competence. Each function does what it's good at, and the line between them is the platform.
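The split above can be sketched in code. This is a toy illustration with hypothetical names, not any real platform SDK: the central team provides a runtime that handles logging and safe failure, and the domain team plugs in the agent's scope, prompt, and a domain-specific eval.

```python
class Platform:
    """What the central team provides: a runtime that handles
    execution, logging, and failing safely. Illustrative only."""

    def __init__(self):
        self.logs = []

    def run(self, agent, task):
        self.logs.append(("start", agent.name, task))
        try:
            result = agent.handle(task)  # domain-owned logic
            self.logs.append(("ok", agent.name, task))
            return result
        except Exception as exc:
            # Fail safely: log and degrade instead of crashing.
            self.logs.append(("error", agent.name, str(exc)))
            return None

class PipelineAgent:
    """What the domain team provides: scope, prompt, and a
    domain-specific eval. Also illustrative."""

    name = "sales-pipeline-agent"
    system_prompt = "Summarize deal risk for the weekly forecast."

    def handle(self, task):
        # A real agent would call a model here; we just echo.
        return f"{self.system_prompt} Task: {task}"

    def eval(self, output):
        # Domain eval: does the output speak to the forecast at all?
        return output is not None and "forecast" in output

platform = Platform()
agent = PipelineAgent()
output = platform.run(agent, "review Q3 deals")
passed = agent.eval(output)
```

Notice that neither class reaches into the other's internals: the platform never inspects the prompt, and the agent never touches the logs. That boundary is the handoff.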

How most companies actually get there

The maturity curve almost always runs in the same direction. It starts centralized, federates over time, and ends in the hybrid.

Months 0 to 6: Fully centralized. One team builds the first three or four agents. Infrastructure is being built from scratch. There aren't enough domain experts in other functions who know how to work with agents yet. Centralization is correct because the bottleneck is "we haven't built a platform yet."

Months 6 to 12: Centralized with embedded engineers. The platform team has built the basics. Functions are starting to want their own agents. The pattern that emerges is an embedded model: a platform engineer rotates into the sales team for three months to build their pipeline agent. The platform team still owns the infrastructure, but ownership of the agent is beginning to move outward.

Months 12 to 18: Federation begins. Domain teams have hired or trained their own AI-fluent ICs. They start building their own agents on top of the platform. The platform team's job shifts from "build all the agents" to "make it easy for other teams to build agents safely." This is the most painful transition because the platform team has to give up control of work they did themselves.

Months 18 to 24: Hybrid steady state. Each function owns its agents. The platform team owns the platform. There is a clear catalog of what's centralized and what's federated, and the boundary is defended by both sides.

Companies that skip the first phase and try to decentralize from day one usually end up with three teams building three different platforms badly. Companies that refuse to federate and stay centralized forever end up with a platform team queue ten months deep and seven functions that have given up waiting and built shadow agents.

The right answer is to start centralized intentionally, federate intentionally, and not pretend either phase is permanent.

The decision that actually matters

The centralized-versus-decentralized debate gets framed as a structure question, but the underlying decision is about where you place the burden of consistency.

If you centralize, you place the burden of consistency on the platform team. They have to keep up with the velocity of every function that needs an agent. They will fall behind. The trade-off is that the infrastructure stays clean.

If you decentralize, you place the burden of consistency on the platform layer itself. You build a platform so good that the domain teams choose to use it because it's faster than rolling their own. You let the domain teams move at their own speed within the guardrails. The trade-off is that you have to invest heavily in platform quality, because if the platform is bad the teams will route around it.

The hybrid is the bet that you can build a good enough platform to let teams federate without losing consistency. It's a real bet, and it requires actual platform engineering investment. Companies that try to do the hybrid without the platform investment just get the worst of both models.

What to do this quarter

Three moves matter most depending on where you are.

If you have zero or one AI agent in production: Centralize. Pick one team (probably engineering, possibly a new "AI Platform" team) and make them accountable for the first wave of agents. Don't let every function try to build their own yet. You don't have the infrastructure to support it.

If you have five or more agents and the central team is becoming a bottleneck: Start federating. Pick one domain team that has strong technical ICs (probably product, sales ops, or marketing ops) and embed a platform engineer with them for ninety days to transfer ownership of one agent. Use that as the prototype.

If you already have a dozen agents and a platform team: Audit the line. What is the platform team actually doing that domain teams are duplicating? What are domain teams doing that should be platform? Be explicit about the split and update the accountability chart so each function owns its own agents while the platform team owns the platform.

The structure isn't the goal. The goal is agents that work, in functions that own them, on a platform that scales. The structure question is just the question of how you get there without rebuilding the same broken pieces three times.

Now map your AI-augmented org.

Drop in your team. Add the AI agents. See the whole picture. Free forever for your first chart.

Build your chart on Orger →