Quick take
MCP is a real protocol that solves a real problem: the N-times-M integration matrix between AI clients and tool servers. I built one in Go. The protocol layer is clean. The hard parts are still auth, permissions, and not handing the model a footgun. If you’re building tool-heavy AI systems, MCP is worth investing in now.
I’ve been building tool integrations for AI systems since early 2024. Every project, the same pattern: custom connector, custom auth wrapper, custom request/response format, custom error handling. Multiply that by every tool and every AI provider and you get an integration matrix that grows quadratically. It’s the microservices API sprawl problem all over again.
MCP – Model Context Protocol – is Anthropic’s answer: a standard protocol for connecting AI models to external tools and data sources. Instead of N clients times M tools worth of custom integrations, you get N clients and M servers all speaking the same language.
I spent the last few weeks building an MCP server in Go to see whether the protocol lives up to the pitch. Here’s what stood out.
What MCP actually is
Strip away the marketing and MCP is a JSON-RPC-based protocol with three core concepts:
Tools. Functions the model can call. Each tool has a name, a description, and a JSON Schema for its inputs. The model decides when to call a tool based on the description.
Resources. Data the model can read. Think files, database records, API responses. Resources have URIs and can be listed or read by the client.
Prompts. Reusable prompt templates that servers can expose. Less interesting for most production use cases, but useful for standardizing common interactions.
The transport layer is deliberately simple: stdio for local servers, HTTP with SSE for remote ones. The protocol handles capability negotiation, so a client can discover what a server offers at connection time.
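On the wire, a tool invocation is a plain JSON-RPC 2.0 request. Here's a sketch of the envelope for calling the deployment-status tool built later in this post – the field names follow the spec, but the request ID and arguments are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolCallRequest models the JSON-RPC 2.0 envelope MCP uses for a
// tools/call invocation. Field names follow the spec; the ID, tool
// name, and arguments below are illustrative.
type toolCallRequest struct {
	JSONRPC string         `json:"jsonrpc"`
	ID      int            `json:"id"`
	Method  string         `json:"method"`
	Params  toolCallParams `json:"params"`
}

type toolCallParams struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

// newToolCall builds a tools/call request for a named tool.
func newToolCall(id int, tool string, args map[string]any) toolCallRequest {
	return toolCallRequest{
		JSONRPC: "2.0",
		ID:      id,
		Method:  "tools/call",
		Params:  toolCallParams{Name: tool, Arguments: args},
	}
}

func main() {
	req := newToolCall(1, "get_deployment_status", map[string]any{
		"service":     "billing-api",
		"environment": "staging",
	})
	b, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(b))
}
```

A server library handles this framing for you; the point is that there's nothing exotic underneath – it's the same request/response shape you'd debug with any JSON-RPC service.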
Building an MCP server in Go
Here’s a minimal MCP tool server that wraps a database query – roughly what I built on a recent project as an internal tool that lets the AI assistant query deployment status.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

type DeploymentStatus struct {
	Service     string `json:"service"`
	Version     string `json:"version"`
	Environment string `json:"environment"`
	Status      string `json:"status"`
	DeployedAt  string `json:"deployed_at"`
}

func main() {
	s := server.NewMCPServer(
		"deployment-status",
		"1.0.0",
		server.WithToolCapabilities(true),
	)

	tool := mcp.NewTool("get_deployment_status",
		mcp.WithDescription("Get the current deployment status for a service in a given environment"),
		mcp.WithString("service", mcp.Required(), mcp.Description("Service name")),
		mcp.WithString("environment", mcp.Required(), mcp.Description("Target environment: staging or production")),
	)
	s.AddTool(tool, handleGetDeploymentStatus)

	if err := server.ServeStdio(s); err != nil {
		log.Fatalf("server failed: %v", err)
	}
}

func handleGetDeploymentStatus(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	service, _ := req.Params.Arguments["service"].(string)
	env, _ := req.Params.Arguments["environment"].(string)

	if env != "staging" && env != "production" {
		return mcp.NewToolResultError("environment must be 'staging' or 'production'"), nil
	}

	// queryDeploymentDB wraps the actual database lookup; elided here.
	status, err := queryDeploymentDB(ctx, service, env)
	if err != nil {
		return mcp.NewToolResultError(fmt.Sprintf("query failed: %v", err)), nil
	}

	data, err := json.Marshal(status)
	if err != nil {
		return mcp.NewToolResultError(fmt.Sprintf("marshal failed: %v", err)), nil
	}
	return mcp.NewToolResultText(string(data)), nil
}
A few things to note. The tool definition includes a JSON Schema for inputs, which means the client can validate before calling. The handler returns structured results or errors. The server handles all the JSON-RPC plumbing – capability negotiation, method routing, error formatting. You just write the handler.
This is roughly 50 lines of actual logic. The equivalent custom integration I had before was about 200 lines, with its own HTTP server, auth middleware, and request parsing. That reduction matters when you have 15 tools to wrap.
Adding auth and permissions
The protocol itself doesn’t define authentication. That’s intentional – different deployments have different auth requirements. But it means you have to solve it yourself, and this is where most teams will spend their time.
Here’s the pattern I use: a middleware wrapper that checks permissions before the tool handler runs.
type PermissionChecker struct {
	allowedTools map[string][]string // tool -> allowed roles
}

// Wrap guards a tool handler with an authn/authz check.
// userFromContext and hasAnyRole are helpers elided here.
func (pc *PermissionChecker) Wrap(toolName string, handler server.ToolHandlerFunc) server.ToolHandlerFunc {
	return func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		user := userFromContext(ctx)
		if user == nil {
			return mcp.NewToolResultError("authentication required"), nil
		}

		allowed := pc.allowedTools[toolName]
		if !hasAnyRole(user, allowed) {
			log.Printf("DENIED: user=%s tool=%s roles=%v", user.ID, toolName, user.Roles)
			return mcp.NewToolResultError("permission denied"), nil
		}

		log.Printf("ALLOWED: user=%s tool=%s", user.ID, toolName)
		return handler(ctx, req)
	}
}
Every tool call gets logged with the user identity, whether it was allowed or denied, and the arguments (redacted where necessary). This isn’t optional. If an AI system can call tools that read your database or modify your infrastructure, you need an audit trail.
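Redaction can be as simple as masking known-sensitive keys before the arguments hit the log. A minimal sketch – the key names here are made up; in practice the list should come from per-tool configuration:

```go
package main

import "fmt"

// redactArgs returns a copy of tool-call arguments with sensitive
// values masked before they reach the audit log. The set of
// sensitive keys is illustrative.
func redactArgs(args map[string]any, sensitive map[string]bool) map[string]any {
	out := make(map[string]any, len(args))
	for k, v := range args {
		if sensitive[k] {
			out[k] = "[REDACTED]"
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	args := map[string]any{"service": "billing-api", "api_token": "secret-123"}
	fmt.Println(redactArgs(args, map[string]bool{"api_token": true}))
}
```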
For remote MCP servers over HTTP, I add standard bearer token auth at the transport layer. For local stdio servers, the auth context comes from the parent process. Either way, the permission check happens at the tool level, not just at the connection level. A user might be allowed to read deployment status but not trigger a rollback.
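For the HTTP case, the transport-layer check is ordinary middleware. Here's a sketch using net/http – `validateToken` stands in for whatever your identity system provides, and the `/mcp` path is just an example:

```go
package main

import (
	"context"
	"net/http"
	"strings"
)

type ctxKey string

const userKey ctxKey = "user"

// parseBearer extracts the token from an Authorization header value.
func parseBearer(header string) string {
	return strings.TrimPrefix(header, "Bearer ")
}

// bearerAuth validates the Bearer token and stashes the resolved
// user in the request context so tool-level permission checks can
// find it. validateToken is a placeholder for your identity system.
func bearerAuth(validate func(token string) (userID string, ok bool), next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, ok := validate(parseBearer(r.Header.Get("Authorization")))
		if !ok {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), userKey, user)))
	})
}

func main() {
	// Wiring sketch: protect the MCP endpoint. The token check and
	// path are placeholders.
	validate := func(token string) (string, bool) { return "alice", token == "demo-token" }
	http.Handle("/mcp", bearerAuth(validate, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, _ := r.Context().Value(userKey).(string)
		w.Write([]byte("hello, " + user))
	})))
	// http.ListenAndServe(":8080", nil)
}
```

The important part is that the middleware only establishes identity; the authorization decision still happens per tool, inside the PermissionChecker.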
The security conversation
This is the part that keeps me up at night. MCP makes it easy to give an AI model access to tools. Maybe too easy. The protocol doesn’t enforce:
- Read vs. write separation. A tool that reads data and a tool that deletes data look the same to the protocol. You have to enforce the distinction.
- Rate limiting. Nothing stops the model from calling a tool a thousand times in a loop. Build your own limits.
- Input sanitization. The model generates the tool arguments. If those arguments end up in a SQL query or a shell command, you’re one prompt injection away from a bad day.
- Blast radius. A tool that queries one record is different from a tool that dumps an entire table. Scope your tools narrowly.
I enforce a simple rule: every tool that can write or modify gets a confirmation step that goes back to the user. The model can propose the action, but a human approves it. For read-only tools, I still scope the query to the current user’s data and add rate limits.
func handleTriggerRollback(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
	service, _ := req.Params.Arguments["service"].(string)
	env, _ := req.Params.Arguments["environment"].(string)

	// Never auto-execute destructive actions
	return mcp.NewToolResultText(fmt.Sprintf(
		"CONFIRMATION REQUIRED: Roll back %s in %s to previous version? "+
			"This action requires human approval.",
		service, env,
	)), nil
}
This is the same principle from my NATO cyber defense days: least privilege, explicit authorization, and comprehensive auditing. The fact that the agent is an AI model doesn’t change the security model. If anything, it makes it more important, because the model can be manipulated through prompt injection in ways a human user can’t.
Where MCP shines
Tool portability. I built the deployment status server once. It works with Claude, with our internal assistant, and with any future client that speaks MCP. That’s the whole pitch, and it delivers.
Discovery. A client can connect to a server and ask “what can you do?” The response is machine-readable and includes schemas. This means the AI model gets accurate tool descriptions automatically instead of relying on hardcoded prompts.
Composability. An AI client can connect to multiple MCP servers simultaneously. One for deployments, one for monitoring, one for documentation. Each server is independently deployable and testable. This is the microservices pattern applied to AI tool access, with the same benefits and the same risks.
Where it doesn’t
No standard auth. Every deployment rolls its own. This will improve, but right now it’s extra work.
Ecosystem maturity. The Go ecosystem is solid thanks to mcp-go, but tooling for testing, debugging, and monitoring MCP interactions is still young. I wrote my own trace logger.
Complexity budget. MCP is one more protocol layer to understand, debug, and operate. For a team with two tools, the overhead might not be worth it. For a team with ten tools across multiple AI clients, it pays for itself quickly.
Should you adopt it now
If you’re building AI systems that call tools – and increasingly, every AI system does – start with one server. Pick your simplest, most-used tool. Wrap it in MCP. Test it against a real client. Measure the integration effort against your current custom approach.
From what I’ve seen, MCP cut tool integration time roughly in half and made our tools testable in isolation for the first time. The security work is the same either way – you have to solve auth and permissions regardless of protocol. MCP just standardizes everything else.
The protocol is real. The ecosystem is growing. The hard problems are still hard. But the easy problems – discovery, invocation, transport – are solved. That’s enough to make it worth building on.