MCP Is the USB of AI, and Why That Matters
You picked your AI coding assistant in 2023 based on what worked best that month. In 2024 the model changed. In 2025 the pricing doubled. In 2026 the terms of service changed what you could do with code the assistant touched. Each time, your team's workflow absorbed the shift. The lesson: you had committed to a vendor, and vendors move.
This post is about the Model Context Protocol (MCP) and why it matters before you bet another workflow on a single assistant.
The USB analogy
Before USB, every device came with its own connector. Printer cable. Mouse cable. Keyboard cable. Serial port for the modem. You had a drawer full of dongles and a laptop with seven different ports, and each combination only worked if the manufacturer decided to support it. USB won because it standardized the shape of the connection. The peripherals and the computer could now be chosen independently.
MCP is the USB of AI. Before it, every assistant had its own plugin format, its own custom tools API, its own context layer. Switch assistants and your workflow broke. Building an integration meant building it three times (once per platform), or picking one platform and staying there.
MCP standardized the plug. The assistant sits on one side. The tools, data, and memory sit on the other. The same server works with Claude, ChatGPT, Gemini, Cursor, and anything else that speaks the protocol.
What Crow does
Crow ships six MCP servers out of the box: memory, projects, sharing, storage, blog, orchestration. The gateway also hosts a router (at /router/mcp) that collapses 49 underlying tools into seven category tools, which keeps context windows focused even in long sessions. External MCP servers plug in through the same gateway: Obsidian, Home Assistant, GitHub, and fifty-plus bundled integrations.
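The collapse from 49 tools to seven category tools amounts to a two-level dispatch: the assistant sees one tool per category, and an action name selects the underlying tool. Here is a conceptual sketch of that pattern; the class, categories, and actions are illustrative, not Crow's actual implementation:

```python
# Conceptual sketch of a category router: instead of exposing every
# underlying tool to the model, expose one tool per category that
# dispatches on an action name. All names here are illustrative.

class CategoryRouter:
    def __init__(self):
        # category -> {action -> callable}
        self.categories = {}

    def register(self, category, action, fn):
        self.categories.setdefault(category, {})[action] = fn

    def exposed_tools(self):
        # The assistant sees one tool per category, not one per action,
        # which keeps the tool list (and the context window) small.
        return sorted(self.categories)

    def call(self, category, action, **kwargs):
        return self.categories[category][action](**kwargs)

router = CategoryRouter()
router.register("memory", "search", lambda query: f"results for {query!r}")
router.register("memory", "store", lambda text: "stored")
router.register("projects", "list", lambda: ["alpha", "beta"])

print(router.exposed_tools())                       # ['memory', 'projects']
print(router.call("memory", "search", query="mcp"))  # results for 'mcp'
```

Three underlying tools, two exposed tools; scale the same shape up and 49 tools become seven. The tradeoff is one extra dispatch parameter per call in exchange for a much shorter tool list in the prompt.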
On the AI side, Crow's chat gateway is BYOAI. Adapters cover OpenAI (and every OpenAI-compatible provider: OpenRouter, Together, Groq, Fireworks), Anthropic, Google Gemini, and Ollama natively. You bring keys; Crow routes the call. Switch providers per conversation, per agent, per pipeline.
Here is what that lets you do:
- Run memory through Anthropic today because you prefer its synthesis.
- Switch to Google tomorrow because you got access to a longer context window.
- Run a local Ollama model for sensitive projects on the same box.
- Keep every memory, every project, every note. The data never moves.
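The provider swaps in the list above come down to a stable adapter interface: the workflow depends only on the interface, so changing providers is a change at the call site. A minimal sketch of that pattern follows; the class and method names are assumptions for illustration, not Crow's API:

```python
from typing import Protocol

class ChatAdapter(Protocol):
    """One interface; any provider behind it. Names are illustrative."""
    def complete(self, messages: list) -> str: ...

class FakeLocalAdapter:
    # Stands in for a local model (e.g. Ollama-backed) in this sketch.
    def complete(self, messages):
        return f"local reply to: {messages[-1]['content']}"

class FakeHostedAdapter:
    # Stands in for a hosted provider (OpenAI-compatible, Anthropic, ...).
    def complete(self, messages):
        return f"hosted reply to: {messages[-1]['content']}"

def run_conversation(adapter: ChatAdapter, prompt: str) -> str:
    # The workflow only touches the interface, so swapping the
    # provider is a one-line change where the adapter is chosen.
    return adapter.complete([{"role": "user", "content": prompt}])

print(run_conversation(FakeLocalAdapter(), "summarize my notes"))
print(run_conversation(FakeHostedAdapter(), "summarize my notes"))
```

Pick the adapter per conversation, per agent, or per pipeline; everything downstream of `run_conversation` is unchanged.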
The assistant is a commodity. The memory, the projects, the integrations, the workflow: all of it survives any single vendor.
Why this matters for builders
If you are building an AI-powered internal tool at work, the question to ask before you pick a framework is: "what does this cost me when the model I bet on changes?" With MCP, the answer is "almost nothing." You swap the adapter; the integration itself keeps working. Your tools keep working, your data keeps its shape, your users do not notice.
Tradeoffs, honestly
MCP is young. The spec evolves. Older third-party MCP servers sometimes need a version bump when the protocol ships a new capability. Expect occasional friction; the remedy is usually a single npm update.
Second tradeoff: not every assistant supports MCP with equal fidelity yet. Claude and the MCP Inspector are the reference implementations. ChatGPT supports it in the desktop app with permissions you manage per server. Gemini and Cursor are catching up. The protocol wins by being embraced; adoption is happening, but for now the edge cases are real.
Third: a standard this open means more surface area: more servers, more tools, more things to discover. Crow's answer is a router that hides that surface when you do not need it and a discovery tool for when you do. Dial the exposure up or down to match the session.
Start here
Build one integration. A ten-minute scaffold, a one-tool server, and a connection to Claude Desktop will teach you the protocol better than any doc: getting started with Crow.
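The last step, wiring a server into Claude Desktop, happens in its `claude_desktop_config.json` under the standard `mcpServers` key. The shape below follows that format; the server name and path are placeholders for whatever your scaffold produces:

```json
{
  "mcpServers": {
    "my-first-server": {
      "command": "node",
      "args": ["/path/to/my-first-server/index.js"]
    }
  }
}
```

Restart Claude Desktop after editing the file and the server's tools appear in the client.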
Next post in this series: build a Crow bundle in an afternoon.