The Model Context Protocol (MCP) is an open standard introduced by Anthropic for connecting AI models to external systems and data sources. It allows Claude and other LLMs to interact with databases, APIs, file systems, and other tools during conversations.
Most companies have taken the fastest path to market: wrap their existing API, slap an MCP server on top, and ship it. The result? MCPs that barely work and miss the entire point of what they’re supposed to accomplish.
Marketing teams get excited about “AI integration,” engineering teams check the box on their roadmap, and everyone assumes they’re ready for the AI-powered future. But this approach makes three critical mistakes: it treats the LLM like a traditional software client, it forwards error messages written for human developers, and it forces the agent through multi-step workflows that should be a single call.
Let’s break down why each of these matters and how to build MCPs that actually work.
Picture this: You’re deep in code, debugging a nasty issue, when you realize you need to file a ticket for a missing dependency. You tell your AI agent: “Create a ticket and assign it to John from the platform team.”
Simple request, right? In the GitHub UI, this is straightforward: click “New Issue,” select assignee from a dropdown, type your description, hit submit. The interface handles all the ID lookups and relationships behind the scenes.
Now you try the same thing with GitHub’s MCP and everything falls apart.
Why? Because they built it like every other API, expecting clean, structured input with exact field names and UUIDs. GitHub’s create_ticket endpoint wants a user UUID, not “John from the platform team.” So now the AI agent has to play detective: search through users, match names, hope John isn’t ambiguous, extract the UUID, then retry the original call.
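To make the mismatch concrete, here is roughly the contract a thin wrapper hands the agent. The field names and the UUID requirement are illustrative assumptions, not GitHub’s actual API:

```typescript
// Hypothetical tool input for a thin API-wrapper MCP (illustrative, not GitHub's real schema).
// Every field must already be an internal identifier before the call can succeed.
interface CreateTicketInput {
  project_id: string;  // internal UUID, not a project name
  assignee_id: string; // internal UUID, not "John from the platform team"
  title: string;
  body: string;
}

// The wrapper forwards the payload as-is, so resolving names to UUIDs
// becomes the agent's problem, not the server's.
async function createTicket(input: CreateTicketInput): Promise<{ ticket_id: string }> {
  const res = await fetch("https://api.example.com/tickets", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  return res.json();
}
```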
The problem is that the LLM is now the user, and LLMs are a completely different type of user than traditional software.
Traditional APIs serve deterministic software that follows predictable paths. Click button, trigger sequence, get result. Every time.
MCPs serve language models that think in conversations, work with ambiguous input, and need to translate human intent into structured actions. They work from meaning expressed in text, not from forms with dropdowns and exact field names.
When you treat these two users the same, you get MCPs that work in demos and fail in real usage. The LLM gets stuck, users get frustrated, and your “AI integration” becomes another abandoned project.
Here’s where most API wrapper MCPs fail spectacularly: error handling.
Traditional APIs return error codes designed for software engineers debugging their applications. “Error 533: Invalid parameter.” “404 Not Found.” “400 Bad Request.” These work fine when a developer can check the documentation, understand the context, and fix their code.
But when an AI agent hits these same errors? Dead end.
Let’s go back to our GitHub ticket example. You ask your AI agent to “assign this ticket to John,” but John’s username doesn’t exist in the system. GitHub’s API returns a 404 error. Now the AI agent is stuck trying to figure out what went wrong and how to fix it.
A proper MCP would return something like: “User ‘John’ not found. Call the users endpoint first to get the correct UUID, then retry this request with the UUID instead of the username.”
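To sketch the contrast (these response shapes are assumptions for illustration, not a published MCP convention):

```typescript
// Pass-through error from a thin wrapper: the upstream status code and nothing else.
const passThroughError = {
  error: "404 Not Found",
};

// Agent-friendly error: what went wrong, why, and the concrete next step
// that lets the workflow recover instead of dead-ending.
const agentFriendlyError = {
  error: "user_not_found",
  message:
    "User 'John' not found. Call the list_users tool to get the correct UUID, " +
    "then retry create_ticket with the UUID instead of the username.",
  suggested_next_call: "list_users",
};
```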
See the difference? One error message leaves the AI agent guessing. The other gives the AI agent a clear path forward. Though this is still only half a solution; we’ll explore how it should really work in the next section.
Most companies just forward their existing API error responses through their MCP. They assume AI agents can interpret cryptic error codes and generic HTTP status messages the same way a human developer would. The result? Failed requests, abandoned workflows, and MCPs that feel unreliable.
Watch an AI agent try to create a GitHub ticket using their current MCP. First, it calls the projects endpoint to find your project ID. Then it calls the users endpoint to get all team members. Next, it searches through the user list to find “John.” Finally, it makes the create ticket call with the proper UUIDs.
Four separate MCP calls for what takes one button click in the GitHub UI.
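Sketched in code (the tool names here are hypothetical stand-ins for the wrapped endpoints), that dance looks something like this:

```typescript
// The single intent "create a ticket and assign it to John", executed the way
// an endpoint-wrapper MCP forces the agent to: four round trips instead of one.
// callTool stands in for the agent's MCP client; all tool names are hypothetical.
async function createTicketTheHardWay(
  callTool: (name: string, args: object) => Promise<any>,
) {
  const projects = await callTool("list_projects", {});          // 1. find the project ID
  const users = await callTool("list_users", {});                // 2. dump every team member into context
  const john = users.find((u: any) => u.name.includes("John"));  // 3. hope exactly one John matches
  return callTool("create_ticket", {                             // 4. finally, the call the user asked for
    project_id: projects[0].id,
    assignee_id: john.id,
    title: "Missing dependency",
  });
}
```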
This is why people think AI agents are slow and stupid. We’re watching AI agents fumble through the same multi-step dance a human would do manually, except the AI agent has to make each step explicit and wait for responses.
The real problem is MCP design. When you wrap individual API endpoints, you force the LLM to become a systems integrator instead of focusing on the actual task.
Better MCP design starts with understanding user intent. What does someone actually want to do? “Create a ticket and assign it to John.” That’s one intent, so it should be one MCP call.
A well-designed MCP endpoint would handle the complexity internally. You tell it “assign this ticket to John,” and the MCP server translates “John” into the correct UUID, looks up the project context, and creates the ticket. If there are multiple Johns, it returns: “Found John Doe and John Webb. Retry with the full name to create the ticket.”
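A minimal sketch of that intent-level endpoint, assuming hypothetical internal helpers (findUsersByName, createIssue) that talk to the underlying API:

```typescript
// Hypothetical internal helpers; the agent never sees these or their UUIDs.
declare function findUsersByName(name: string): Promise<{ id: string; fullName: string }[]>;
declare function createIssue(args: {
  title: string;
  description: string;
  assigneeId: string;
}): Promise<{ url: string }>;

// One tool call covers the whole intent: "create a ticket and assign it to John".
async function createAndAssignTicket(
  title: string,
  description: string,
  assigneeName: string,
): Promise<{ ok: boolean; message: string; ticketUrl?: string }> {
  const matches = await findUsersByName(assigneeName);

  if (matches.length === 0) {
    return {
      ok: false,
      message: `No user matching '${assigneeName}' was found. Ask the user for the correct name and retry.`,
    };
  }
  if (matches.length > 1) {
    // Ambiguity is surfaced to the conversation instead of guessed at.
    const names = matches.map((m) => m.fullName).join(" and ");
    return { ok: false, message: `Found ${names}. Retry with the full name to create the ticket.` };
  }

  const ticket = await createIssue({ title, description, assigneeId: matches[0].id });
  return { ok: true, message: `Ticket created and assigned to ${matches[0].fullName}.`, ticketUrl: ticket.url };
}
```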
This approach solves multiple problems at once. The AI agent makes one call instead of four. The workflow feels natural instead of clunky. More importantly, you avoid massive token waste and context pollution.
Those four API calls return enormous amounts of unnecessary data. The users endpoint dumps every team member into the context. The projects endpoint lists every repository. The AI agent now has hundreds of lines of irrelevant information cluttering its working memory, making every subsequent decision harder and slower.
The solution isn’t complex, but it requires rethinking your approach entirely.
Start with user workflows, not API endpoints. Map out what people actually want to accomplish with your service. “Create and assign a ticket.” “Schedule a meeting with the team.” “Update project status.” These are your MCP endpoints.
Build intelligence into your MCP server. When someone says “assign to John,” your MCP should handle the name resolution, deal with ambiguity, and provide clear feedback. When someone says “schedule for next Tuesday,” your MCP should understand context and handle the date conversion.
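For the scheduling case, the date arithmetic can live in the server too. A small sketch in plain TypeScript (no calendar library assumed) of resolving “next Tuesday” against the server’s clock:

```typescript
// Resolve a relative phrase like "next Tuesday" into a concrete date server-side,
// so the agent never does calendar math in the middle of a conversation.
function nextWeekday(weekday: number, from: Date = new Date()): Date {
  // weekday: 0 = Sunday ... 6 = Saturday; always returns a strictly future date.
  const daysAhead = (weekday - from.getDay() + 7) % 7 || 7;
  const result = new Date(from);
  result.setDate(from.getDate() + daysAhead);
  return result;
}

const nextTuesday = nextWeekday(2); // handed on to the real scheduling call
console.log(nextTuesday.toDateString());
```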
Design responses for language models, not humans. Return structured information that AI agents can easily parse and act on. Skip the unnecessary data. Focus on what the LLM needs to complete the task or move forward.
Handle errors like a helpful colleague. When something goes wrong, explain what happened and suggest next steps. Make it easy for AI agents to recover and continue the workflow.
The companies getting this right are building MCPs that feel magical. One request accomplishes what used to take multiple API calls. Errors are rare and recoverable. The LLM spends time on the actual task instead of system integration busywork.
Your users will notice the difference immediately. Instead of watching AI agents struggle through multi-step workflows, they see smooth, intelligent automation that actually saves time and reduces friction.
We’ve applied these principles in our own Raindrop MCP’s application development workflow.
Instead of exposing individual API endpoints, we created a state machine that guides AI agents through the entire process of building and deploying production applications. Each MCP endpoint returns detailed instructions telling the AI agent exactly what to do next, where it needs user input, and how to proceed.
Here’s how it works: You tell Claude Code “I want to build an MCP server.” Claude calls our login endpoint and receives structured guidance for the next steps. After login, it knows to either set up a team or start with the product requirements document. Each response includes a next_action field that explicitly tells Claude which MCP endpoint to call next.
The flow breaks down like this: Product requirements → Database schema generation → Documentation retrieval → Code generation → Testing → Deployment → Endpoint testing. Each step provides Claude with everything it needs to succeed, plus clear instructions for moving forward.
The magic happens in the MCP responses. Instead of returning raw API data, each endpoint returns a comprehensive prompt that tells the agent what just happened, which MCP endpoint to call next, and where user input is still needed.
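As an illustration (the next_action field is the one described above; every other field name is an assumption), a single step’s response might look like:

```typescript
// Sketch of one state-machine step in the flow: the response tells Claude what
// just happened, whether user input is needed, and exactly which tool to call next.
const schemaStepResponse = {
  status: "complete",
  step: "database_schema_generation",
  summary: "Generated the database schema from the product requirements document.",
  requires_user_input: false,
  next_action: "retrieve_documentation", // the MCP endpoint Claude should call next
  instructions:
    "Call retrieve_documentation to pull the framework docs needed for code generation, " +
    "then continue to code generation.",
};
```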
This approach lets us build entire application backends through conversation. Claude guides users through requirements gathering, generates all the code, sets up GitHub repositories, deploys to production, runs tests, fixes bugs, and hands back a working MCP server endpoint.
The whole process feels like working with a senior developer who knows exactly what needs to happen next, rather than fumbling through API documentation trying to figure out the right sequence of calls.