If the Model Context Protocol gives you 2005 vibes, you’re not imagining it. MCP feels like the good part of the old web-services dream brought forward for AI agents: describe what you can do, let a generic client discover it, and integrate without bespoke glue. That was the pitch behind WSDL in the Web 2.0 era—Web Services Description Language—an XML contract where a service declared its operations, messages, and types so tools could generate clients automatically. If you’ve never seen it, the original specs still read like a time capsule: WSDL 1.1 and WSDL 2.0.
WSDL didn’t live alone. UDDI tried to be the yellow pages for services—governed discovery and publishing in one registry, with all the ceremony you’d expect from early-2000s enterprise IT (OASIS UDDI). And then there was DISCO, Microsoft’s pragmatic crawler. Point DISCO at a base URL and it would pull down the discovery artifacts (.wsdl, .xsd, .disco, and .discomap files) and stash them locally so Visual Studio and friends could wire things up. The API for that mechanism still exists in the .NET docs under DiscoveryClientProtocol. The loop was elegant on paper: describe, discover, integrate.
What went sideways wasn’t the idea; it was the weight. The WS-* stack accreted policy layers (WS-Policy), security headers (WS-Security), and interop profiles (WS-I Basic Profile) until most teams quietly fled to simpler REST designs. Registries became a governance project. XML made even trivial payloads feel baroque. The contract survived, but the joy didn’t.
MCP lands with the same underlying promise and none of the baroque baggage. Instead of SOAP envelopes, the wire is plain JSON-RPC 2.0. Instead of XSD types, tools are shaped by JSON Schema. Instead of central registries, discovery happens at runtime: the host connects and asks, “what can you do?”, and the server replies with a live inventory of tools, resources, and prompts. The model is spelled out at modelcontextprotocol.io.
The most important shift is the audience. WSDL spoke to code generators and SOAP toolkits; MCP speaks to LLM hosts and agents. An agent can enumerate your capabilities, plan a sequence of calls, validate parameters against your schema, and execute with guardrails, all without a bespoke plugin for every vendor. If WSDL’s spiritual cousin was the IDE, MCP’s is the Language Server Protocol, simple capabilities presented to a smart client that decides when and how to use them.
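The "validate parameters against your schema" step can be sketched like this. Real hosts use a full JSON Schema validator; this hand-rolled check covers only `required` keys and primitive `type` fields, just enough to show the guardrail:

```python
# Map JSON Schema primitive type names to Python runtime types.
TYPE_MAP = {"string": str, "number": (int, float), "integer": int,
            "boolean": bool, "object": dict, "array": list}

def check_args(schema: dict, args: dict) -> list[str]:
    """Return a list of violations; empty means the call may proceed."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, sub in schema.get("properties", {}).items():
        if key in args and "type" in sub:
            if not isinstance(args[key], TYPE_MAP[sub["type"]]):
                errors.append(f"{key}: expected {sub['type']}")
    return errors

schema = {"type": "object",
          "properties": {"email": {"type": "string"}},
          "required": ["email"]}

print(check_args(schema, {"email": "a@example.com"}))  # []
print(check_args(schema, {"email": 42}))               # ['email: expected string']
```

The point is that the host, not the model, enforces the shape: a hallucinated parameter gets rejected before any tool runs.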
Why does this matter now? Because agents are only as good as the tools they can reach and the constraints they can respect. MCP gives teams a portable, vendor-neutral way to expose capability while keeping users in the loop. Risky operations can be gated by host-mediated consent rather than buried in policy markup. Inputs and outputs can be strictly typed so models don’t hallucinate the shape of your API. And because it’s just JSON-RPC, you can inspect traffic with a human eye and debug without summoning a SOAP trace viewer from 2003.
Security isn’t a side quest here, and mishandling it could send MCP down the same road WSDL traveled. Simon Willison argues that MCP’s mix‑and‑match tooling makes it dangerously easy to assemble a “lethal trifecta”: agents that simultaneously have access to private data, process untrusted content, and can communicate externally, creating clean exfiltration paths via prompt injection. He lays out the pattern in Model Context Protocol has prompt injection security problems, codifies it in The lethal trifecta for AI agents, and reiterates the risk in his recent Bay Area AI Security Meetup talk: as long as all three legs stay in play you will eventually leak, and pushing the decision onto end users isn’t a real mitigation. The practical takeaway: design MCP servers so at least one leg is impossible—strip network egress, avoid untrusted inputs, or confine tools to read‑only scopes—and assume prompt injection attempts are routine, not exceptional.
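A back-of-the-envelope audit of the trifecta can be automated if you tag each tool with capability flags. The tagging scheme below is invented for illustration; MCP itself does not define these flags:

```python
# Hypothetical capability flags for auditing a server's combined tool set.
PRIVATE_DATA = "reads_private_data"
UNTRUSTED_INPUT = "processes_untrusted_content"
EGRESS = "communicates_externally"

def trifecta_complete(tools: list[dict]) -> bool:
    """True if, across the whole tool set, all three legs are in play."""
    flags = set()
    for tool in tools:
        flags.update(tool.get("capabilities", []))
    return {PRIVATE_DATA, UNTRUSTED_INPUT, EGRESS} <= flags

safe_server = [
    {"name": "read_docs", "capabilities": [PRIVATE_DATA]},
    {"name": "summarize", "capabilities": [UNTRUSTED_INPUT]},
]  # no egress leg: the exfiltration path is cut

risky_server = safe_server + [
    {"name": "send_webhook", "capabilities": [EGRESS]},
]

print(trifecta_complete(safe_server))   # False
print(trifecta_complete(risky_server))  # True
```

Note that the audit runs over the union of capabilities, not per tool: the danger comes from composition, which is exactly why mix-and-match tooling makes the trifecta so easy to assemble by accident.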
If you already run REST, gRPC, or GraphQL, the on-ramp is short. Take the operations your product actually needs agents to perform, wrap them as tools with tight JSON Schemas, and publish read-only context—think schemas, templates, sitemaps, policy docs—as resources that the host can cache and cite. For anything with side effects, declare explicit scopes and require confirmation so the host can put a human in the loop. Treat your schemas like you would a public API: version the shapes, document intent and constraints, and design for idempotency so retries don’t torch data. You’re not rebuilding UDDI; you’re letting clients discover what’s relevant in the moment they connect.
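That on-ramp might look like the sketch below: an existing REST operation wrapped as a tool descriptor with a tight schema, an explicit scope, and a confirmation requirement for side effects. The names here (`create_refund`, `orders:write`, `requiresConfirmation`) are hypothetical, not part of any real API:

```python
def make_tool(name: str, description: str, schema: dict,
              *, read_only: bool, scope: str) -> dict:
    """Build a tool descriptor; side-effecting tools demand confirmation."""
    return {
        "name": name,
        "description": description,
        "inputSchema": schema,
        "scope": scope,
        "requiresConfirmation": not read_only,  # host puts a human in the loop
    }

refund_tool = make_tool(
    "create_refund",
    "Refund an order; idempotent via a client-supplied idempotency key",
    {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "idempotency_key": {"type": "string"},  # retries don't torch data
        },
        "required": ["order_id", "idempotency_key"],
    },
    read_only=False,
    scope="orders:write",
)

print(refund_tool["requiresConfirmation"])  # True
```

The idempotency key in the schema is the versioned-public-API discipline from above made mechanical: a retried call with the same key must not refund twice.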
WSDL’s core insight, that capability contracts enable generic clients, was right. MCP resurrects that idea for the LLM era with better ergonomics, clearer trust boundaries, and a runtime discovery model that fits how agents actually work. You get the magic of “describe once, integrate everywhere,” minus the WS-* hangover. Ship an MCP server alongside your existing APIs, and let today’s clients, the models, do what yesterday’s toolchains never quite pulled off.