I published a piece on the agent infrastructure stack back in October. Five months later, enough has changed to warrant an update. Some of my predictions held. Others aged poorly. Both kinds are worth examining.
What follows is my best attempt at an honest map of who's building what in agent infrastructure as of March 2026. I work at SocioLogic, which means I have a perspective and also a bias. I'll try to be clear about both.
Transport: MCP Won the First Round. A2A Is Playing a Different Game.
Five months ago I wrote that MCP and A2A would likely coexist rather than compete. That's exactly what happened, but not in the way I expected.
MCP (Model Context Protocol) from Anthropic has become the default for tool-use communication. The adoption numbers are staggering. Most major AI platforms support it natively. Claude, OpenClaw, and dozens of smaller assistants all speak MCP. If you're building a tool that an AI agent should be able to call, you implement an MCP server. That's settled.
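To make "implement an MCP server" concrete, here's a minimal sketch using the official Python SDK's FastMCP helper. The tool itself (a currency converter with a stubbed rate) is a placeholder I made up; a real server would wrap your actual API.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The tool is a hypothetical placeholder; real servers wrap your actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("exchange-rates")  # server name shown to connecting clients

@mcp.tool()
def convert(amount: float, from_currency: str, to_currency: str) -> str:
    """Convert an amount between currencies (stubbed rate for illustration)."""
    rate = 1.08 if (from_currency, to_currency) == ("EUR", "USD") else 1.0
    return f"{amount} {from_currency} = {amount * rate:.2f} {to_currency}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; MCP clients attach to this process
```

That's roughly the entire surface area for the tool-provider side, which is a big part of why adoption moved so fast.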
A2A (Agent-to-Agent) from Google is carving out a different niche: true peer communication between agents that need to negotiate, delegate, and collaborate. The use cases are more complex than MCP's "call this tool, get a result" pattern. Think: two agents from different organizations negotiating a procurement agreement, or a research agent handing off findings to a writing agent with context and constraints attached.
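To show the difference in shape, here's a schematic of what a research-to-writer handoff might carry. This is not the actual A2A message schema; the field names are invented for illustration, and a real implementation would use the protocol's own task and artifact structures.

```python
# Schematic only: the kind of context a peer handoff carries.
# Field names are invented and do NOT match the A2A wire format.
handoff = {
    "from_agent": "research-agent.example.com",
    "to_agent": "writing-agent.example.com",
    "task": "Draft a market summary from the attached findings",
    "findings": [
        {"claim": "Agent-to-agent transaction volume grew this quarter",
         "source": "internal registry metrics"},
    ],
    "constraints": {
        "tone": "neutral",
        "max_words": 800,
        "deadline": "2026-03-20T17:00:00Z",
    },
}
```

Note how much of the payload is context and constraints rather than a function call. That's the part MCP's tool-call pattern doesn't try to express.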
A2A adoption is slower, which makes sense. The problems it solves are harder and less common. But the teams building serious multi-agent systems are paying attention. I expect A2A to matter a lot more by year-end as agentic workflows get more sophisticated.
Discovery: Still Early, Getting Crowded
This is our layer, so I'll separate what I can see from the inside versus what's publicly visible.
From the outside: there are now at least five companies building agent registries or directories of some kind. The approaches range from fully open (crawl and index any agent card on the web) to fully curated (manual onboarding, human-reviewed listings). We're somewhere in between: open spec, curated verification, algorithmic ranking.
From the inside: the real competition isn't between registries. It's between "registry as a concept" and "just hardcode your integrations." Most developers today still configure their agent's connections manually. The discovery layer has to be so much better than hardcoding that developers switch voluntarily. We're not there yet. Getting close, but not there.
The agent card spec we published as open source has gotten traction. Several other registries have adopted it as their base format, which is exactly what we wanted. Common format, competing implementations. The interoperability story is better than I expected at this point.
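For readers who haven't seen one, an agent card is just a small machine-readable description of an agent: what it does, where it lives, what it speaks, what it costs. The fields below are illustrative only, not the published spec; I'm sketching the general shape rather than quoting it.

```python
# Illustrative agent card: the general shape, not the spec's exact field names.
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches incoming invoices against purchase orders",
    "endpoint": "https://agents.example.com/invoice-reconciler",
    "protocols": ["mcp"],              # transports the agent speaks
    "capabilities": ["invoice.match", "invoice.flag_discrepancy"],
    "pricing": {"model": "per_call", "amount_usd": 0.002},
    "verification": {"status": "verified", "last_audit": "2026-02-14"},
}
```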
Payments: x402 Has Momentum, But the Race Isn't Over
x402 is the protocol we bet on, and so far the bet looks decent. Coinbase's backing gives it credibility, the Base L2 chain gives it speed and low cost, and USDC gives it price stability. Several agent platforms have implemented x402 support, and we're seeing real (small, but real) transaction volume between agents that don't share an organizational boundary.
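The core flow is simple enough to fit in a few lines: request a resource, get an HTTP 402 back with payment requirements, pay, retry. The sketch below is schematic; the header name and payload shape follow my reading of the spec, and the signing step is deliberately left as a stub because in practice you'd use an x402 client library.

```python
# Schematic x402 client flow: request, receive a 402 with payment
# requirements, pay, retry. Signing is stubbed; use an x402 SDK in practice.
import base64
import json
import requests

def sign_payment(requirements: dict) -> dict:
    # Placeholder: a real client builds and signs an on-chain authorization
    # (e.g. a USDC transfer on Base) matching the server's stated requirements.
    raise NotImplementedError("use an x402 client library for actual signing")

def fetch_with_x402(url: str) -> requests.Response:
    resp = requests.get(url)
    if resp.status_code != 402:
        return resp  # resource was free or already authorized

    requirements = resp.json()            # price, asset, pay-to address, etc.
    payment = sign_payment(requirements)  # hypothetical helper, see above

    return requests.get(
        url,
        headers={"X-PAYMENT": base64.b64encode(json.dumps(payment).encode()).decode()},
    )
```

Settlement happens on-chain, which is where the seconds-not-minutes speed comes from.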
But I'd be dishonest if I didn't mention the competition. Stripe announced an "agent billing" product in beta that handles pay-per-call pricing through traditional payment rails. It's not as fast as x402 (settlement takes minutes, not seconds) and it has minimum transaction thresholds that make true sub-cent micropayments impractical. But it's Stripe. The developer experience is polished, the brand is trusted, and "just use Stripe" is a powerful default.
My updated take: x402 wins for agent-to-agent payments where speed and autonomy matter. Stripe wins for agent-to-human billing where familiarity and existing accounting integration matter. Both will coexist. The interesting question is which use case grows faster.
Orchestration: Fragmented and Staying That Way
I predicted no dominant orchestration standard would emerge, and that's held true. If anything, the fragmentation has increased.
LangChain/LangGraph remains the largest ecosystem. They've leaned into the graph-based workflow model, which handles complex multi-step agent tasks well. The community is massive. The documentation is better than it was six months ago (low bar, but still).
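As a concrete example of the graph model, here's a minimal two-node LangGraph pipeline. The node bodies are stubs; in a real workflow each step would call a model or a tool.

```python
# Minimal LangGraph sketch: a two-node research -> write pipeline.
# Node bodies are stubs; real steps would call models or tools.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    topic: str
    draft: str

def research(state: State) -> dict:
    return {"draft": f"raw notes on {state['topic']}"}

def write(state: State) -> dict:
    return {"draft": state["draft"] + ", turned into polished prose"}

graph = StateGraph(State)
graph.add_node("research", research)
graph.add_node("write", write)
graph.set_entry_point("research")
graph.add_edge("research", "write")
graph.add_edge("write", END)

app = graph.compile()
print(app.invoke({"topic": "agent registries", "draft": ""}))
```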
CrewAI has found a real niche in role-based agent teams. If your mental model is "I need a researcher, a writer, and an editor working together," CrewAI makes that natural to express. It's less flexible than LangGraph for unusual topologies but faster to set up for common patterns.
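The same researcher-plus-writer idea in CrewAI reads more like a staffing plan than a graph. A sketch follows; note that a real crew also needs an LLM configured (CrewAI defaults to API keys picked up from the environment).

```python
# CrewAI sketch: role-based agents plus sequential tasks.
# Assumes an LLM is configured via environment variables (CrewAI's default).
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Gather facts about the agent infrastructure market",
    backstory="A meticulous analyst who cites sources",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a readable brief",
    backstory="A concise technical writer",
)

research_task = Task(
    description="Collect recent developments in agent payments",
    expected_output="A bulleted list of findings",
    agent=researcher,
)
writing_task = Task(
    description="Write a 300-word brief from the findings",
    expected_output="A short prose brief",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
result = crew.kickoff()
```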
AutoGen from Microsoft continues to evolve, with a stronger focus on enterprise scenarios. Multi-agent debate and consensus-finding workflows are its strength.
New entrant worth watching: agent-native orchestration built directly into AI platforms. Both Anthropic and Google are building orchestration capabilities into their model APIs. If the model itself can coordinate multi-agent workflows without a separate framework, the orchestration layer might get absorbed into the transport/model layer. Too early to tell, but worth tracking.
Verification: The Critical Gap
This is the layer I'm most worried about. Five months ago I called verification "the least developed" layer. It still is.
The problem is real and getting more urgent. As more agents come online and start transacting with each other, the attack surface grows. A malicious agent that passes basic smoke tests but exfiltrates data during real interactions is hard to catch. An agent that works fine for six months and then degrades in subtle ways is even harder.
We run verification at SocioLogic: identity checks, capability testing, continuous monitoring, quality scoring. It works for our registry's scale. It does not scale to thousands of agents without significant automation advances that nobody, us included, has built yet.
What the industry needs: a verification framework that combines automated checks (uptime, latency, format compliance, response consistency) with reputation signals (user ratings, transaction history, dispute frequency) and periodic human audits. Think certificate authorities crossed with app store review crossed with credit scoring. Nobody has this. Building it is hard because verification is fundamentally adversarial: agents will try to game whatever system you build.
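To make the shape of that framework concrete, here's a toy scoring sketch combining the two signal families. Every signal, weight, and threshold is invented for illustration; a real system would also need the adversarial-resistance work described above, which is the genuinely hard part.

```python
# Toy trust-score sketch: automated checks plus reputation signals.
# All weights and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AgentSignals:
    uptime: float             # 0..1, from automated probes
    format_compliance: float  # 0..1, share of responses passing schema checks
    avg_user_rating: float    # 1..5, from counterparties
    dispute_rate: float       # 0..1, disputed transactions / total
    months_since_audit: int   # recency of the last human review

def trust_score(s: AgentSignals) -> float:
    automated = 0.5 * s.uptime + 0.5 * s.format_compliance
    reputation = 0.7 * (s.avg_user_rating / 5.0) + 0.3 * (1.0 - s.dispute_rate)
    audit_decay = max(0.0, 1.0 - 0.05 * s.months_since_audit)  # stale audits count less
    return round(100 * (0.4 * automated + 0.4 * reputation + 0.2 * audit_decay), 1)

print(trust_score(AgentSignals(0.999, 0.97, 4.6, 0.01, 3)))  # prints 94.0
```

The scoring function is the easy 10 percent. The other 90 percent is keeping those input signals honest when agents are actively trying to inflate them.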
My prediction: a significant agent security incident (data leak or financial loss through a compromised agent) will hit mainstream news before end of 2026. That incident will do more for verification adoption than any amount of thought leadership from companies like ours.
Where SocioLogic Fits (Honestly)
We operate at the intersection of discovery, payments, and verification. In practice, that means:
- Discovery: We run a verified agent registry built on the open agent card spec. It's the most comprehensive registry in terms of verified agents, but "most comprehensive" still means hundreds, not thousands.
- Payments: We facilitate x402 micropayments between agents, including escrow and dispute resolution. Transaction volume is growing but still modest in absolute terms.
- Verification: We verify agents through a combination of automated testing and human review. It's our strongest differentiator and our biggest scaling challenge.
- Vertical tools: Persona agents, campaigns, focus groups. These aren't infrastructure; they're applications built on top of the infrastructure. They're also what pays the bills right now while the infrastructure market matures.
What we're not: we're not a transport protocol (we use MCP), we're not an orchestration framework (we integrate with them), and we're not a model provider. We're a registry, a payment facilitator, and a trust layer. That's enough surface area for one company.
The Six-Month Outlook
Transport is settled. MCP for tools, A2A for peer collaboration. Build on both.
Discovery will consolidate from five registries to two or three that matter. Interoperability between them will improve because the agent card spec gives them a common format.
Payments will split: x402 for autonomous agent-to-agent, traditional rails for agent-to-human. Both grow.
Orchestration stays fragmented. That's fine. Different problems need different frameworks.
Verification gets a wake-up call, probably the hard way. The companies already working on trust and verification (including us) will benefit, but only if our solutions actually work when tested by real adversarial conditions.
The stack is forming. It's not clean, it's not settled, and it probably won't look like anyone's whiteboard diagram. But it's real, and it's growing. That's enough to build on.