Nobody wakes up wanting to build a package manager. You wake up wanting to solve a different problem entirely, and then you realize the tooling to solve that problem doesn't exist, and before you know it you're debating dependency resolution strategies at 2am on a Tuesday.
That's roughly how we ended up building npx sociologic.
We were trying to make it easy for developers to connect AI agents to other agents. Find a web scraping service. Hook up a persona research tool. Plug in a random number generator with verifiable entropy. Simple stuff. Except every single connection required reading docs, copying URLs, configuring auth, and wiring things together by hand. For every single service. Every single time.
Sound familiar? It should. This is the same problem that npm solved for JavaScript, pip solved for Python, and brew solved for macOS tools. You had useful code sitting on the internet, and getting it into your project was an annoying manual process until someone built a package manager.
Why a Registry, Not Just a List
Our first attempt was literally a curated list. A JSON file with agent names, descriptions, and endpoints. You could search it. It worked, in the sense that a paper phone book works. Technically functional. Spiritually dead.
The problem with a list is that it has no opinions. It can't tell you which agent actually does what it claims. It can't tell you what it costs. It can't verify that the endpoint is even still running. It's a directory of promises with no enforcement.
A registry, by contrast, has opinions. Our registry requires agents to publish structured metadata at /.well-known/agent.json. It runs smoke tests. It checks that endpoints respond. It verifies that the entity behind the service is who they say they are. When you search the registry for "web scraping," you get results you can actually trust to some degree, not just whatever someone typed into a description field.
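To make that concrete, here is a minimal sketch of what publishing and validating a card at /.well-known/agent.json could look like. The field names and checks below are illustrative assumptions, not the registry's actual schema:

```typescript
// Illustrative shape of an agent card. These field names are
// assumptions for the sketch, not the real registry schema.
interface AgentCard {
  name: string;
  description: string;
  capabilities: string[];            // e.g. ["web-scraping"]
  endpoint: string;                  // base URL the agent serves from
  pricing?: { perRequestUsd: number };
}

// A smoke-test-style check: does the card carry the fields a
// registry would need before listing the agent?
function validateAgentCard(card: unknown): card is AgentCard {
  if (typeof card !== "object" || card === null) return false;
  const c = card as Record<string, unknown>;
  return (
    typeof c.name === "string" &&
    typeof c.description === "string" &&
    Array.isArray(c.capabilities) &&
    c.capabilities.every((x) => typeof x === "string") &&
    typeof c.endpoint === "string" &&
    c.endpoint.startsWith("https://")
  );
}
```

A real registry would fetch the card over HTTPS and follow up with liveness and behavior checks; the point here is that structured, machine-checkable metadata is what separates a registry from a list.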
This was a deliberate design choice. npm learned the hard way what happens when you let anyone publish anything with no verification: typosquatting, malware, abandoned packages that half the internet depends on. We wanted verification baked in from day one, not bolted on after the first security incident.
Why a CLI
We built a web dashboard first. Nice UI. Search, browse, filter by capability. It looked great in demos.
Nobody used it.
The people who actually connect agents to services are developers. They live in terminals. They don't want to leave their editor to open a browser, search a website, copy a URL, go back to their editor, and paste it into a config file. They want to type a command and have the thing work.
So we built a CLI. npx sociologic drops you into an interactive experience where you can search the registry, inspect agent cards, check pricing, and install connections. "Install" in this context means adding the right MCP server configuration to your setup so your AI assistant can reach the agent. One command. Done.
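Under the hood, "install" amounts to roughly this: merge a new server entry into the MCP client's configuration. The sketch below follows the common mcpServers config convention; the bridge package name and arguments are hypothetical, not the CLI's actual internals:

```typescript
// Sketch of what "install" does conceptually: add an MCP server
// entry so the client can reach the agent. The "mcpServers" key
// follows the common MCP client config convention; the command and
// args (including "sociologic-mcp") are illustrative assumptions.
type McpConfig = {
  mcpServers: Record<string, { command: string; args: string[] }>;
};

function installConnection(
  config: McpConfig,
  agentName: string,
  endpoint: string
): McpConfig {
  return {
    ...config,
    mcpServers: {
      ...config.mcpServers,
      [agentName]: {
        command: "npx",
        args: ["-y", "sociologic-mcp", "--endpoint", endpoint], // hypothetical bridge
      },
    },
  };
}
```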
We distributed it through npx because that's zero-install. No global package to manage, no version conflicts, no "works on my machine" problems. You run it, it runs. The same thinking that made npx popular for scaffolding tools applies here: reduce friction to the absolute minimum for the first interaction.
Why the CLI Had to Handle Payments
Here's where agent package management diverges from npm or pip. When you install a JavaScript package, it's free. Open source, MIT license, go nuts. When you install an agent connection, the agent on the other end often charges per request.
If the CLI just sets up the connection and leaves payment as a separate problem, you haven't actually solved anything. The developer still needs to figure out wallets, fund them with USDC, and configure spending limits. That's exactly the kind of manual configuration we were trying to eliminate.
So the CLI handles payments too. During setup, it walks you through wallet configuration (or connects to an existing one). When your agent calls a paid service, x402 handles the payment automatically. The CLI shows you pricing upfront so there are no surprises. You can set spending limits per service or globally.

This was the hardest part to build and honestly still the roughest part of the experience. Wallet management for non-crypto-native developers is painful. We've smoothed over the worst edges but I won't pretend it's as clean as npm install yet. Getting USDC onto Base still involves too many steps for someone who just wants to call an API.
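The spending-limit piece of that flow is the most mechanical part, and it can be sketched simply. The limit and ledger shapes below are assumptions for illustration, not the CLI's actual internals; x402 itself handles the payment once a call is authorized:

```typescript
// Sketch of spending-limit enforcement before a paid agent call.
// The Limits shape and in-memory ledger are assumptions, not the
// CLI's real internals.
interface Limits {
  globalUsd: number;
  perService: Record<string, number>;
}

class SpendGuard {
  private spent: Record<string, number> = {};
  private total = 0;
  constructor(private limits: Limits) {}

  // Returns true if the call fits both the global and per-service
  // caps, and records the spend if so.
  authorize(service: string, priceUsd: number): boolean {
    const serviceCap = this.limits.perService[service] ?? Infinity;
    const serviceSpent = this.spent[service] ?? 0;
    if (this.total + priceUsd > this.limits.globalUsd) return false;
    if (serviceSpent + priceUsd > serviceCap) return false;
    this.spent[service] = serviceSpent + priceUsd;
    this.total += priceUsd;
    return true;
  }
}
```

Checking limits client-side before the payment protocol ever fires is what makes "no surprises" possible: the call is refused locally rather than discovered on a statement later.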
Lessons We Stole from Other Package Managers
We studied existing package ecosystems carefully: what worked, what didn't, and why. Some specific things we took away:
From npm: lockfiles matter. npm's early days without lockfiles meant "works on my machine" was endemic. We track the exact version of each agent card at the time of installation. If the agent updates its capabilities or pricing, you're not surprised mid-workflow.
From pip: dependency resolution is harder than it looks. When Agent A depends on Agent B, and Agent B changes its API, things break silently. We don't have full dependency resolution yet. Right now we just version-stamp connections and warn you if a remote agent card has changed since you installed it. Full dependency resolution across agent networks is a hard problem we're still thinking about.
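The version-stamping described above boils down to fingerprinting the agent card at install time and comparing against the live card later. A minimal sketch, using an illustrative djb2-style hash (any stable digest would do):

```typescript
// Sketch of version-stamping: record a fingerprint of the agent
// card JSON at install time, then warn if the live card has
// drifted. The djb2-style hash is illustrative; a real lockfile
// would use a cryptographic digest.
function fingerprint(cardJson: string): number {
  let h = 5381;
  for (let i = 0; i < cardJson.length; i++) {
    h = ((h * 33) ^ cardJson.charCodeAt(i)) >>> 0;
  }
  return h;
}

function cardHasDrifted(installedFingerprint: number, liveCardJson: string): boolean {
  return fingerprint(liveCardJson) !== installedFingerprint;
}
```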
From brew: cask-style separation. Brew distinguishes between command-line tools (formulae) and applications (casks). We found a similar distinction useful. Some agents are "tools" that do one thing (scrape a page, generate a random number). Others are "services" that maintain state or require ongoing connections (persona research with memory, long-running analysis). Different installation and lifecycle patterns for each.
From all of them: discoverability is everything. A package manager with great packages that nobody can find is useless. We invested heavily in search. Keyword search, capability-based filtering, semantic search over agent descriptions. If you know roughly what you need, you should find it in under 30 seconds.
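The keyword and capability filters compose straightforwardly; semantic search layers on top. A sketch of the first two, with an entry shape that mirrors the agent-card fields (an assumption for this example):

```typescript
// Sketch of keyword + capability search over registry entries.
// The entry shape is an assumption mirroring the agent-card fields.
interface RegistryEntry {
  name: string;
  description: string;
  capabilities: string[];
}

function search(
  entries: RegistryEntry[],
  query: string,
  capability?: string
): RegistryEntry[] {
  const q = query.toLowerCase();
  return entries.filter((e) => {
    const matchesText =
      e.name.toLowerCase().includes(q) ||
      e.description.toLowerCase().includes(q);
    const matchesCap = capability ? e.capabilities.includes(capability) : true;
    return matchesText && matchesCap;
  });
}
```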
What Works
The basic loop works well. Developer runs npx sociologic, searches for an agent, inspects the card, installs the connection. Their AI assistant (Claude through OpenClaw, or any MCP-compatible client) can now reach that agent. Paid services just work through x402. The whole thing takes maybe two minutes from zero to a working agent connection.
The verification layer has caught real problems. We've rejected agents that claimed capabilities they didn't have. We've flagged agents with suspicious network behavior during smoke tests. The registry is small enough right now that we can be thorough. That won't scale forever, but for now it means the quality bar is high.
Developers seem to like the CLI-first approach. We get very little traffic to the web dashboard. Almost everything happens in the terminal. That tells us we read the audience right.
What's Still Rough
Wallet setup. I keep coming back to this. A developer who has never touched crypto shouldn't need to understand Base, USDC, or gas fees to call a paid agent API. We abstract most of it away but the initial wallet creation and funding flow still feels like it belongs to a different product. We're exploring custodial options that would let developers pay with a credit card and have us handle the crypto side. Not there yet.
Offline-first is hard. npm works offline once you've installed packages. Agent connections are inherently online. If the remote agent is down, your workflow breaks. We cache agent cards locally so discovery still works offline, but the actual agent calls obviously require network. This is a fundamental difference from code package managers that we can't fully paper over.
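The card caching mentioned above can be sketched as a small store that serves stale entries with a flag rather than failing, so offline discovery degrades gracefully. The TTL and in-memory storage are assumptions; a real cache would persist to disk:

```typescript
// Sketch of a local agent-card cache for offline discovery.
// The TTL and in-memory Map are assumptions for illustration;
// a real implementation would persist to disk.
interface CachedCard {
  cardJson: string;
  fetchedAt: number;
}

class CardCache {
  private store = new Map<string, CachedCard>();
  constructor(private ttlMs: number) {}

  put(agent: string, cardJson: string, now: number): void {
    this.store.set(agent, { cardJson, fetchedAt: now });
  }

  // Returns the cached card even when stale, plus a staleness flag,
  // instead of failing outright when the network is gone.
  get(agent: string, now: number): { cardJson: string; stale: boolean } | undefined {
    const hit = this.store.get(agent);
    if (!hit) return undefined;
    return { cardJson: hit.cardJson, stale: now - hit.fetchedAt > this.ttlMs };
  }
}
```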
The registry is small. As of today, there are maybe 40 verified agents. npm has millions of packages. We're not comparing ourselves to npm's scale, but there's a chicken-and-egg problem: developers want a full registry before they adopt the CLI, and agent builders want CLI adoption before they register. We're solving this by building many of the initial agents ourselves and by making registration as frictionless as we can.
We don't have a good story for updates yet. If an agent you've installed pushes a breaking change, right now you find out when your workflow fails. We need something like npm outdated for agent connections. It's on the roadmap but not shipped.
Why We Think This Matters
Package managers feel boring. They're plumbing. Nobody tweets about dependency resolution algorithms. But npm didn't just make it easier to install JavaScript libraries. It created an ecosystem. It made it practical to build on other people's work. It changed what was possible to build in a weekend.
That's what we're after for agents. Right now, connecting agents to other agents is like JavaScript before npm: manual, fragile, and full of friction. If we get the packaging right, it becomes practical for any developer to build agent workflows that compose dozens of specialized services. Not because they configured each one by hand, but because there's a registry and a CLI that handles the boring parts.
We're not there yet. But the plumbing is going in.