Understanding the Model Context Protocol 3: The Design Patterns That Make AI Understand

Jussi Hallila

Designing Tools AI Can Actually Use: From APIs to Intelligent Interfaces

Part 3 of a 3-part series exploring how MCP revolutionizes AI integration

The difference between an MCP tool that technically works and one that AI models can use effectively often comes down to design decisions that seem trivial at first glance. In this post, we'll explore the specific strategies that transform frustrating AI interactions into smooth, productive workflows.

The Art of Tool Descriptions

The XML Structure That Works

After testing dozens of approaches, the most effective tool descriptions follow a simple XML structure that gives AI models exactly what they need, when they need it:

<usecase>
Explains WHAT the tool does and WHEN to use it
</usecase>
<instructions>  
Covers HOW to use it correctly, with specifics about parameters and expected inputs
</instructions>

This separation helps models quickly determine relevance before diving into implementation details. Here's why this works:

Bad Description (Wall of Text):

"This tool searches for team members in a workspace. It requires a workspace_id parameter which should be a string containing the unique identifier for the workspace. You can also optionally filter by role using the role parameter which accepts 'admin', 'member', or 'guest'. Results are paginated with a default limit of 20. The tool returns member details including name, email, join date, and activity status. Use this when you need to find specific team members or get an overview of workspace membership."

Good Description (Structured):

<usecase>
Finds team members in a specific workspace. Perfect for getting member lists, checking who has access, or finding people by role.
</usecase>
<instructions>
workspace_id: Use the ID from findWorkspaces() results
role: Optional filter - 'admin', 'member', or 'guest'  
Returns up to 20 members per call with pagination support
</instructions>

The structured version is easier to scan: the model can judge relevance from the usecase alone before reading any parameter details.
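This structure is also easy to generate programmatically. Here is a minimal sketch in plain Python (no particular MCP SDK assumed; `tool_description` is a hypothetical helper) that composes a description string from the two parts:

```python
def tool_description(usecase: str, instructions: list[str]) -> str:
    """Compose a structured tool description from a usecase and instruction lines."""
    body = "\n".join(instructions)
    return f"<usecase>\n{usecase}\n</usecase>\n<instructions>\n{body}\n</instructions>"

desc = tool_description(
    "Finds team members in a specific workspace. Perfect for getting member "
    "lists, checking who has access, or finding people by role.",
    [
        "workspace_id: Use the ID from findWorkspaces() results",
        "role: Optional filter - 'admin', 'member', or 'guest'",
        "Returns up to 20 members per call with pagination support",
    ],
)
```

Keeping descriptions in code like this makes it trivial to enforce the same structure across every tool you expose.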

The 90/10 Rule: When Less Description is More

Here's a counterintuitive principle: you don't always need exhaustive descriptions. If your error handling can guide the model to correct usage 90% of the time, lean on that instead of bloating your descriptions.

Example: The File Upload Tool

Overly Detailed Approach:

<usecase>
Uploads files to a project folder
</usecase>
<instructions>
file_path: Local file path (must exist, supports .pdf, .doc, .txt, .md, .jpg, .png, max 10MB)
project_id: Project identifier (get from listProjects, must be valid UUID format)
folder_name: Optional folder name (alphanumeric plus hyphens/underscores only, max 50 chars)
description: Optional file description (max 200 characters)
overwrite: Boolean, defaults to false (if true, replaces existing file with same name)
</instructions>

90/10 Approach:

<usecase>
Uploads files to project folders. Supports documents and images.
</usecase>
<instructions>
file_path: Path to the file you want to upload
project_id: Project ID (use listProjects if you need to find it)
folder_name: Optional folder to organize the file
</instructions>

Then handle edge cases with helpful errors:

{
  "error": "File size exceeds 10MB limit",
  "current_size": "15.2MB", 
  "suggestion": "Compress the file or use uploadLargeFile() for files over 10MB"
}

This approach keeps descriptions focused while still providing guidance when things go wrong.

Response Design: Every Answer is a Prompt

The most important insight for MCP design is this: every response from your tool is an opportunity to guide the model's next action. Rather than returning bare data, return the data along with guidance on what the model can do with it.

Poor Response Design

{
  "projects": [
    {"id": "proj_123", "name": "Mobile App", "status": "active"},
    {"id": "proj_456", "name": "Web Portal", "status": "planning"}
  ]
}

Excellent Response Design

{
  "projects": [
    {"id": "proj_123", "name": "Mobile App", "status": "active", "member_count": 12},
    {"id": "proj_456", "name": "Web Portal", "status": "planning", "member_count": 5}
  ],
  "total_found": 2,
  "message": "Found 2 projects. Use getProjectDetails(project_id) for full information or getProjectMembers(project_id) to see team composition."
}

The second response teaches the model what it can do next.
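Building that guidance into every response is easiest with a small wrapper. A sketch (the `getProjectDetails`/`getProjectMembers` names come from the example above and are assumed, not part of any real API):

```python
def project_list_response(projects: list[dict]) -> dict:
    """Wrap raw project rows with a count and a next-step hint for the model."""
    return {
        "projects": projects,
        "total_found": len(projects),
        "message": (
            f"Found {len(projects)} projects. Use getProjectDetails(project_id) "
            "for full information or getProjectMembers(project_id) to see team composition."
        ),
    }

response = project_list_response([
    {"id": "proj_123", "name": "Mobile App", "status": "active", "member_count": 12},
])
```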

Consolidation Strategy: Finding the Sweet Spot

When converting APIs to MCP tools, the biggest challenge is determining the right level of consolidation. Too granular, and you overwhelm the model with choices. Too consolidated, and you create confusing mega-tools.

Example: Task Management System

Original API (12 endpoints):

  • /tasks/create
  • /tasks/list
  • /tasks/get/{id}
  • /tasks/update/{id}
  • /tasks/delete/{id}
  • /tasks/assign/{id}
  • /tasks/complete/{id}
  • /comments/add
  • /comments/list/{task_id}
  • /files/attach/{task_id}
  • /files/list/{task_id}
  • /notifications/send

Poor Consolidation (1 mega-tool):

taskManager(action, task_id, title, description, assignee, due_date, comment_text, file_path, notification_type, ...)

This creates a confusing tool with too many optional parameters.

Good Consolidation (4 focused tools):

  1. manageTask - Create, update, complete tasks
  2. findTasks - Search and list tasks with filters
  3. taskCollaboration - Add comments, attach files
  4. deleteTask - Separate for safety (requires confirmation)

Each tool has a clear purpose and manageable parameter set.

The Safety Principle

Always separate potentially destructive operations. You might want users to approve deletions but allow free use of read operations. This balance gives you control while maintaining productivity.
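One simple way to enforce this, sketched below with a hypothetical `delete_task` tool, is to gate the destructive path behind an explicit confirmation flag:

```python
def delete_task(task_id: str, confirm: bool = False) -> dict:
    """Destructive operation kept separate and gated behind explicit confirmation."""
    if not confirm:
        return {
            "error": "Deletion requires confirmation",
            "suggestion": f"Call delete_task('{task_id}', confirm=True) after the user approves",
        }
    # ...perform the actual deletion against the backend here...
    return {"message": f"Task {task_id} deleted."}
```

The model can still delete when the user asks for it, but never accidentally as a side effect of exploring the API.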

Real-World Example: Customer Support System

Let's see these principles in action with a customer support platform that originally had 45 API endpoints.

The Challenge

Users want to ask things like: "Find all high-priority tickets from VIP customers that have been open for more than 48 hours and haven't received an engineering response."

Traditional API Approach (Problems)

The model would need to:

  1. List all tickets (hoping pagination doesn't cut off results)
  2. Filter by priority level
  3. Check customer VIP status for each ticket
  4. Calculate time since creation
  5. Check response history for engineering involvement
  6. Manually correlate all this data

That's 6-8 API calls with complex logic between each step.

MCP Tool Approach (Solution)

Tool 1: findCriticalTickets

<usecase>
Finds tickets that need urgent attention based on priority, customer status, response time, and other factors.
Perfect for daily triage and identifying overlooked issues.
</usecase>
<instructions>
filters: Combine criteria like 'high_priority', 'vip_customers', 'no_engineering_response'
time_threshold: How long tickets have been open (e.g., '48h', '3d', '1w')
Returns detailed results with customer context and response history
</instructions>

Sample Response:

{
  "critical_tickets": [
    {
      "id": "TICK-1234",
      "title": "Payment processing fails on mobile",
      "priority": "high", 
      "customer": "Acme Corp (VIP)",
      "open_duration": "52 hours",
      "last_response": "Customer service, 6 hours ago",
      "engineering_involvement": false,
      "urgency_score": 95
    }
  ],
  "summary": "Found 3 critical tickets requiring engineering attention",
  "message": "Use assignTicket() to route these to engineering or escalateTicket() for immediate attention.",
  "suggested_actions": [
    "Review tickets with urgency_score > 90 first",
    "Consider escalating tickets open > 72 hours"
  ]
}

Tool 2: ticketActions

<usecase>
Take actions on tickets: assign, escalate, add internal notes, or update status.
Use after identifying tickets that need attention.
</usecase>
<instructions>
ticket_id: ID from findCriticalTickets or other search results
action: 'assign', 'escalate', 'add_note', 'update_status'
details: Assignment target, escalation reason, note content, or new status
</instructions>

The Result

What previously required 6-8 API calls with complex logic becomes a 2-tool interaction:

  1. Find the problematic tickets with all context included
  2. Take action based on the structured results

The AI model can handle this workflow reliably because each step provides complete information and clear guidance for the next step.
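The core of findCriticalTickets is just the filtering logic the model would otherwise have to reconstruct call-by-call. A toy sketch over in-memory data (the ticket fields and backend are assumptions standing in for the real support platform):

```python
from datetime import datetime, timedelta

# Toy ticket data standing in for the real support backend.
TICKETS = [
    {"id": "TICK-1234", "priority": "high", "vip": True,
     "opened": datetime.now() - timedelta(hours=52), "engineering_involved": False},
    {"id": "TICK-1235", "priority": "low", "vip": False,
     "opened": datetime.now() - timedelta(hours=3), "engineering_involved": True},
]

def find_critical_tickets(min_hours_open: int = 48) -> dict:
    """One call doing the filtering that spanned 6-8 API calls in the old workflow."""
    hits = [
        t for t in TICKETS
        if t["priority"] == "high" and t["vip"]
        and not t["engineering_involved"]
        and datetime.now() - t["opened"] > timedelta(hours=min_hours_open)
    ]
    return {
        "critical_tickets": hits,
        "summary": f"Found {len(hits)} critical tickets requiring engineering attention",
        "message": "Use ticketActions(ticket_id, action='escalate') for immediate attention.",
    }
```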

Error Handling That Actually Helps

The worst error messages are those that don't help the model correct its behavior. Structure your errors to include:

  1. What went wrong (clear and specific)
  2. Why it went wrong (brief context)
  3. What to do instead (actionable suggestion)
  4. Example of correct usage (when helpful)

Poor Error Handling

{
  "error": "Invalid request",
  "code": 400
}

Excellent Error Handling

{
  "error": "Missing required customer_id parameter",
  "context": "Customer lookup requires either customer_id or email address",
  "suggestion": "Use findCustomers() to search by name, or provide email if known",
  "example": "findCustomers('Acme Corp') then use the returned customer_id"
}

The second approach teaches the model how to succeed rather than just reporting failure.

Guidelines for Tool Parameter Design

Keep Parameters Meaningful

  • Use clear, descriptive names: customer_email not email
  • Provide sensible defaults when possible
  • Make required vs. optional parameters obvious

Support Natural Language Inputs

Instead of requiring exact matches, support flexible input:

// Rigid approach
priority: "must be exactly 'high', 'medium', or 'low'"

// Flexible approach  
priority: "priority level - 'high', 'urgent', 'critical' for high priority; 'normal', 'medium' for standard; 'low' for minor issues"

Provide Context in Responses

Always include enough information for the next logical step:

{
  "customer": {
    "id": "cust_789",
    "name": "Acme Corp", 
    "tier": "enterprise",
    "open_tickets": 3,
    "account_manager": "Sarah Chen"
  },
  "message": "Customer found. Use getCustomerTickets(customer_id) to see open issues or createTicket(customer_id) to file a new one."
}

Testing Your Tool Design

The best way to validate your MCP tool design is to think through realistic user scenarios:

  1. Can the AI complete the task in 2-3 tool calls?
  2. Are error messages helpful enough to guide correction?
  3. Does each response provide clear next steps?
  4. Would a human find the workflow logical and efficient?

If any answer is "no," revisit your consolidation strategy and response design.


Wrapping Up: Your MCP Journey Starts Now

We've covered the fundamental principles of the Model Context Protocol and discussed how to design tools that AI can actually use effectively. By now, you should have a solid foundation for creating MCP implementations that are genuinely useful in production environments.

The Key Takeaways

Remember the core principles we've explored:

  • Context is everything: Well-designed tool descriptions and structured responses make the difference between frustration and productivity
  • Less can be more: The 90/10 rule shows that exhaustive documentation isn't always necessary; sometimes good error handling serves better
  • Think like an AI: Every response is an opportunity to guide the next action
  • Consolidate thoughtfully: Find the sweet spot between too many granular tools and confusing mega-tools