Jussi Hallila
Part 3 of a 3-part series exploring how MCP revolutionizes AI integration
The difference between an MCP tool that technically works and one that AI models can use effectively often comes down to design decisions that seem trivial at first glance. In this post, we'll explore the specific strategies that transform frustrating AI interactions into smooth, productive workflows.
After testing dozens of approaches, the most effective tool descriptions follow a simple XML structure that gives AI models exactly what they need, when they need it:
<usecase>
Explains WHAT the tool does and WHEN to use it
</usecase>
<instructions>
Covers HOW to use it correctly, with specifics about parameters and expected inputs
</instructions>
This separation helps models quickly determine relevance before diving into implementation details. Here's why this works:
Bad Description (Wall of Text):
"This tool searches for team members in a workspace. It requires a workspace_id parameter which should be a string containing the unique identifier for the workspace. You can also optionally filter by role using the role parameter which accepts 'admin', 'member', or 'guest'. Results are paginated with a default limit of 20. The tool returns member details including name, email, join date, and activity status. Use this when you need to find specific team members or get an overview of workspace membership."
Good Description (Structured):
<usecase>
Finds team members in a specific workspace. Perfect for getting member lists, checking who has access, or finding people by role.
</usecase>
<instructions>
workspace_id: Use the ID from findWorkspaces() results
role: Optional filter - 'admin', 'member', or 'guest'
Returns up to 20 members per call with pagination support
</instructions>
The structured version is easier to scan and understand quickly.
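To make this concrete, here's a sketch of how a structured description might be attached to a tool definition. The MCP specification gives each tool a name, a description, and a JSON-Schema input; the tool name findMembers and its fields are illustrative, not from a real API:

```python
# Illustrative tool definition: the structured description travels in the
# standard MCP "description" field, the parameters in "inputSchema".
FIND_MEMBERS_TOOL = {
    "name": "findMembers",  # hypothetical tool name
    "description": (
        "<usecase>\n"
        "Finds team members in a specific workspace. Perfect for getting "
        "member lists, checking who has access, or finding people by role.\n"
        "</usecase>\n"
        "<instructions>\n"
        "workspace_id: Use the ID from findWorkspaces() results\n"
        "role: Optional filter - 'admin', 'member', or 'guest'\n"
        "Returns up to 20 members per call with pagination support\n"
        "</instructions>"
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "workspace_id": {"type": "string"},
            "role": {"type": "string", "enum": ["admin", "member", "guest"]},
        },
        "required": ["workspace_id"],
    },
}
```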
Here's a counterintuitive principle: you don't always need exhaustive descriptions. If your error handling can guide the model to correct usage 90% of the time, lean on that instead of bloating your descriptions.
Overly Detailed Approach:
<usecase>
Uploads files to a project folder
</usecase>
<instructions>
file_path: Local file path (must exist, supports .pdf, .doc, .txt, .md, .jpg, .png, max 10MB)
project_id: Project identifier (get from listProjects, must be valid UUID format)
folder_name: Optional folder name (alphanumeric plus hyphens/underscores only, max 50 chars)
description: Optional file description (max 200 characters)
overwrite: Boolean, defaults to false (if true, replaces existing file with same name)
</instructions>
90/10 Approach:
<usecase>
Uploads files to project folders. Supports documents and images.
</usecase>
<instructions>
file_path: Path to the file you want to upload
project_id: Project ID (use listProjects if you need to find it)
folder_name: Optional folder to organize the file
</instructions>
Then handle edge cases with helpful errors:
{
"error": "File size exceeds 10MB limit",
"current_size": "15.2MB",
"suggestion": "Compress the file or use uploadLargeFile() for files over 10MB"
}
This approach keeps descriptions focused while still providing guidance when things go wrong.
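A minimal server-side sketch of the 90/10 split, assuming the 10MB cap and the uploadLargeFile() fallback from the error example above (both illustrative):

```python
# Sketch: keep the description lean and let validation teach edge cases.
# The 10MB cap and uploadLargeFile() fallback mirror the error example above.
MAX_UPLOAD_BYTES = 10 * 1024 * 1024

def validate_upload(file_size_bytes: int):
    """Return None when the upload is fine, or a guiding error payload."""
    if file_size_bytes <= MAX_UPLOAD_BYTES:
        return None
    return {
        "error": "File size exceeds 10MB limit",
        "current_size": f"{file_size_bytes / (1024 * 1024):.1f}MB",
        "suggestion": "Compress the file or use uploadLargeFile() for files over 10MB",
    }
```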
The most important insight for MCP design is this: every response from your tool is an opportunity to guide the model's next action. Compare a response that returns only data with one that also returns guidance.
Basic Response (Data Only):
{
"projects": [
{"id": "proj_123", "name": "Mobile App", "status": "active"},
{"id": "proj_456", "name": "Web Portal", "status": "planning"}
]
}
Better Response (With Guidance):
{
"projects": [
{"id": "proj_123", "name": "Mobile App", "status": "active", "member_count": 12},
{"id": "proj_456", "name": "Web Portal", "status": "planning", "member_count": 5}
],
"total_found": 2,
"message": "Found 2 projects. Use getProjectDetails(project_id) for full information or getProjectMembers(project_id) to see team composition."
}
The second response teaches the model what it can do next.
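One way to implement this pattern is a small wrapper that adds the count and next-step hint to every result; getProjectDetails and getProjectMembers are the hypothetical companion tools from the response above:

```python
# Sketch: enrich every tool response with a count and a next-step hint.
# getProjectDetails / getProjectMembers are hypothetical companion tools.
def with_guidance(projects):
    return {
        "projects": projects,
        "total_found": len(projects),
        "message": (
            f"Found {len(projects)} projects. Use getProjectDetails(project_id) "
            "for full information or getProjectMembers(project_id) to see team "
            "composition."
        ),
    }
```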
When converting APIs to MCP tools, the biggest challenge is determining the right level of consolidation. Too granular, and you overwhelm the model with choices. Too consolidated, and you create confusing mega-tools.
Original API (12 endpoints):
/tasks/create
/tasks/list
/tasks/get/{id}
/tasks/update/{id}
/tasks/delete/{id}
/tasks/assign/{id}
/tasks/complete/{id}
/comments/add
/comments/list/{task_id}
/files/attach/{task_id}
/files/list/{task_id}
/notifications/send
Poor Consolidation (1 mega-tool):
taskManager(action, task_id, title, description, assignee, due_date, comment_text, file_path, notification_type, ...)
This creates a confusing tool with too many optional parameters.
Good Consolidation (4 focused tools):
Each tool has a clear purpose and manageable parameter set.
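One plausible grouping, sketched with invented tool names (this is an illustration of the principle, not a canonical design), folds the twelve endpoints into four focused tools:

```python
# Illustrative consolidation: 12 endpoints folded into 4 focused tools.
# Tool and action names are assumptions, not a canonical design.
TOOLS = {
    "manageTasks": ["create", "list", "get", "update", "assign", "complete"],
    "deleteTask": ["delete"],  # destructive op kept separate for approval flows
    "taskDiscussion": ["add_comment", "list_comments", "attach_file", "list_files"],
    "sendNotification": ["send"],
}
```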
Always separate potentially destructive operations. You might want users to approve deletions but allow free use of read operations. This balance gives you control while maintaining productivity.
Let's see these principles in action with a customer support platform that originally had 45 API endpoints.
Users want to ask things like: "Find all high-priority tickets from VIP customers that have been open for more than 48 hours and haven't received an engineering response."
The model would need to: search tickets filtered by priority and status, look up each customer's account tier to identify VIPs, calculate how long each ticket has been open, and inspect each ticket's response history to check for an engineering response.
That's 6-8 API calls with complex logic between each step.
Tool 1: findCriticalTickets
<usecase>
Finds tickets that need urgent attention based on priority, customer status, response time, and other factors.
Perfect for daily triage and identifying overlooked issues.
</usecase>
<instructions>
filters: Combine criteria like 'high_priority', 'vip_customers', 'no_engineering_response'
time_threshold: How long tickets have been open (e.g., '48h', '3d', '1w')
Returns detailed results with customer context and response history
</instructions>
Sample Response:
{
"critical_tickets": [
{
"id": "TICK-1234",
"title": "Payment processing fails on mobile",
"priority": "high",
"customer": "Acme Corp (VIP)",
"open_duration": "52 hours",
"last_response": "Customer service, 6 hours ago",
"engineering_involvement": false,
"urgency_score": 95
}
],
"summary": "Found 3 critical tickets requiring engineering attention",
"message": "Use assignTicket() to route these to engineering or escalateTicket() for immediate attention.",
"suggested_actions": [
"Review tickets with urgency_score > 90 first",
"Consider escalating tickets open > 72 hours"
]
}
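The time_threshold strings above ('48h', '3d', '1w') can be parsed with a small helper; the format comes from the tool's instructions, but the parser itself is a sketch:

```python
# Sketch: parse time_threshold strings like '48h', '3d', '1w' (format from
# the tool description above) into timedeltas.
from datetime import timedelta

_UNITS = {"h": "hours", "d": "days", "w": "weeks"}

def parse_threshold(value: str) -> timedelta:
    unit = value[-1]
    if unit not in _UNITS:
        raise ValueError(f"Unsupported threshold unit in {value!r}; use h, d, or w")
    return timedelta(**{_UNITS[unit]: int(value[:-1])})
```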
Tool 2: ticketActions
<usecase>
Take actions on tickets: assign, escalate, add internal notes, or update status.
Use after identifying tickets that need attention.
</usecase>
<instructions>
ticket_id: ID from findCriticalTickets or other search results
action: 'assign', 'escalate', 'add_note', 'update_status'
details: Assignment target, escalation reason, note content, or new status
</instructions>
What previously required 6-8 API calls with complex logic becomes a two-tool interaction: findCriticalTickets to surface the right tickets, then ticketActions to handle them.
The AI model can handle this workflow reliably because each step provides complete information and clear guidance for the next step.
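Sketched with stubbed tools (real versions would call the support platform's API), the workflow might look like this:

```python
# Stubbed two-tool workflow; names follow the tools described above.
def find_critical_tickets(filters, time_threshold):
    # Stub: a real implementation would query the ticketing backend.
    return {"critical_tickets": [{"id": "TICK-1234", "urgency_score": 95}]}

def ticket_actions(ticket_id, action, details):
    # Stub: a real implementation would perform the action and confirm it.
    return {"ticket_id": ticket_id, "action": action, "status": "done"}

result = find_critical_tickets(
    filters=["high_priority", "vip_customers", "no_engineering_response"],
    time_threshold="48h",
)
escalated = [
    ticket_actions(t["id"], "escalate", "Open >48h, no engineering response")
    for t in result["critical_tickets"]
    if t["urgency_score"] > 90
]
```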
The worst error messages are those that don't help the model correct its behavior. Structure your errors to include the specific problem, the context the model was missing, a suggested fix, and an example of correct usage.
Unhelpful Error:
{
"error": "Invalid request",
"code": 400
}
Helpful Error:
{
"error": "Missing required customer_id parameter",
"context": "Customer lookup requires either customer_id or email address",
"suggestion": "Use findCustomers() to search by name, or provide email if known",
"example": "findCustomers('Acme Corp') then use the returned customer_id"
}
The second approach teaches the model how to succeed rather than just reporting failure.
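A generic guard can enforce that shape so no missing parameter ever comes back as a bare 400; the context, suggestion, and example strings are supplied by the caller, as in the payload above:

```python
# Sketch: never return a bare 400; always attach context, a suggestion,
# and an example. Field contents are caller-supplied.
def require_param(params, name, context, suggestion, example):
    """Return None if `name` is present, else a guiding error payload."""
    if params.get(name) is not None:
        return None
    return {
        "error": f"Missing required {name} parameter",
        "context": context,
        "suggestion": suggestion,
        "example": example,
    }
```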
Prefer descriptive parameter names: customer_email, not email.
Instead of requiring exact matches, support flexible input:
// Rigid approach
priority: "must be exactly 'high', 'medium', or 'low'"
// Flexible approach
priority: "priority level - 'high', 'urgent', 'critical' for high priority; 'normal', 'medium' for standard; 'low' for minor issues"
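In code, the flexible approach is just a synonym table plus normalization; the mapping below is an illustrative sketch:

```python
# Sketch: accept synonyms and normalize, instead of rejecting near-misses.
# The synonym table is illustrative.
PRIORITY_SYNONYMS = {
    "high": "high", "urgent": "high", "critical": "high",
    "normal": "medium", "medium": "medium",
    "low": "low", "minor": "low",
}

def normalize_priority(value: str) -> str:
    key = value.strip().lower()
    if key not in PRIORITY_SYNONYMS:
        raise ValueError(f"Unknown priority {value!r}; try 'high', 'medium', or 'low'")
    return PRIORITY_SYNONYMS[key]
```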
Always include enough information for the next logical step:
{
"customer": {
"id": "cust_789",
"name": "Acme Corp",
"tier": "enterprise",
"open_tickets": 3,
"account_manager": "Sarah Chen"
},
"message": "Customer found. Use getCustomerTickets(customer_id) to see open issues or createTicket(customer_id) to file a new one."
}
The best way to validate your MCP tool design is to think through realistic user scenarios and ask: Can the model complete the task in just a few tool calls? Does every response point to a sensible next step? Do error messages guide the model back to correct usage?
If any answer is "no," revisit your consolidation strategy and response design.
Over this series, we've covered the fundamental principles of the Model Context Protocol and how to design tools that AI models can actually use effectively. By now, you should have a solid foundation for creating MCP implementations that are genuinely useful in production environments.
Remember the core principles we've explored: