
Building Custom Tools With Model Context Protocol

Learn how to build MCP servers to extend AI capabilities. Create tools that AI models can seamlessly integrate, demonstrated through an arXiv paper search implementation.

By Aditya Karnam Gururaj Rao · Updated by Arjun Jaggi · Jan. 31, 25 · Tutorial

Model Context Protocol (MCP) is becoming increasingly important in the AI development landscape, enabling seamless integration between AI models and external tools. In this guide, we'll explore how to create an MCP server that enhances AI capabilities through custom tool implementations.

What Is Model Context Protocol?

MCP is a protocol that allows AI models to interact with external tools and services in a standardized way. It enables AI assistants like Claude to execute custom functions, process data, and interact with external services while maintaining a consistent interface.


Getting Started With MCP Server Development

To begin creating an MCP server, you'll need a basic understanding of Python and async programming. Let's walk through the process of setting up and implementing a custom MCP server.

Setting Up Your Project

The easiest way to start is by using the official MCP server creation tool. You have two options:

Shell
 
# Using uvx (recommended)
uvx create-mcp-server

# Or using pip
pip install create-mcp-server
create-mcp-server


This creates a basic project structure:

Plain Text
 
my-server/
├── README.md
├── pyproject.toml
└── src/
    └── my_server/
        ├── __init__.py
        ├── __main__.py
        └── server.py


Implementing Your First MCP Server

Let's create a practical example: an arXiv paper search tool that AI models can use to fetch academic papers. Here's how to implement it:

Python
 
import asyncio
from mcp.server.models import InitializationOptions
import mcp.types as types
from mcp.server import NotificationOptions, Server
import mcp.server.stdio
import arxiv

server = Server("mcp-scholarly")
client = arxiv.Client()

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """
    List available tools.
    Each tool specifies its arguments using JSON Schema validation.
    """
    return [
        types.Tool(
            name="search-arxiv",
            description="Search arxiv for articles related to the given keyword.",
            inputSchema={
                "type": "object",
                "properties": {
                    "keyword": {"type": "string"},
                },
                "required": ["keyword"],
            },
        )
    ]

@server.call_tool()
async def handle_call_tool(
        name: str, arguments: dict | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """
    Handle tool execution requests.
    Tools can modify server state and notify clients of changes.
    """
    if name != "search-arxiv":
        raise ValueError(f"Unknown tool: {name}")
    
    if not arguments:
        raise ValueError("Missing arguments")
        
    keyword = arguments.get("keyword")
    if not keyword:
        raise ValueError("Missing keyword")

    # Search arXiv papers
    search = arxiv.Search(
        query=keyword, 
        max_results=10, 
        sort_by=arxiv.SortCriterion.SubmittedDate
    )
    results = client.results(search)
    
    # Format results
    formatted_results = []
    for result in results:
        article_data = "\n".join([
            f"Title: {result.title}",
            f"Summary: {result.summary}",
            f"Links: {'||'.join([link.href for link in result.links])}",
            f"PDF URL: {result.pdf_url}",
        ])
        formatted_results.append(article_data)

    return [
        types.TextContent(
            type="text",
            text=f"Search articles for {keyword}:\n"
                 + "\n\n\n".join(formatted_results)
        ),
    ]
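
The handlers above register the tool but do not start the server on their own. Appended to the same server.py, the entry point below is a minimal sketch based on the standard template generated by create-mcp-server: it serves requests over stdio, the transport Claude Desktop uses to launch MCP servers. The version string is a placeholder.

Python
 
async def main():
    # Serve requests over stdio; this is how Claude Desktop launches MCP servers.
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="mcp-scholarly",
                server_version="0.1.0",  # placeholder version
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    asyncio.run(main())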


Key Components Explained

  1. Server initialization. The server is initialized with a unique name that identifies your MCP service.
  2. Tool registration. The @server.list_tools() decorator registers available tools and their specifications using JSON Schema.
  3. Tool implementation. The @server.call_tool() decorator handles the actual execution of the tool when called by an AI model.
  4. Response formatting. Tools return structured responses that can include text, images, or other embedded resources.

Best Practices for MCP Server Development

  1. Input validation. Always validate input parameters thoroughly using JSON Schema (see the sketch after this list).
  2. Error handling. Implement comprehensive error handling to provide meaningful feedback.
  3. Resource management. Properly manage external resources and connections.
  4. Documentation. Provide clear descriptions of your tools and their parameters.
  5. Type safety. Use Python's type hints to ensure type safety throughout your code.
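
To make the input-validation point above concrete, here is a minimal sketch that checks incoming arguments against the same JSON Schema declared in handle_list_tools before running the search. It assumes the third-party jsonschema package, which is not part of the MCP SDK and would need to be added as a project dependency.

Python
 
import jsonschema  # assumed third-party dependency; add it to pyproject.toml if you use it

# The same JSON Schema declared for the tool in handle_list_tools.
SEARCH_ARXIV_SCHEMA = {
    "type": "object",
    "properties": {
        "keyword": {"type": "string"},
    },
    "required": ["keyword"],
}


def validate_arguments(arguments: dict | None) -> dict:
    """Raise a readable error if the arguments do not match the declared schema."""
    if arguments is None:
        raise ValueError("Missing arguments")
    try:
        jsonschema.validate(instance=arguments, schema=SEARCH_ARXIV_SCHEMA)
    except jsonschema.ValidationError as exc:
        raise ValueError(f"Invalid arguments for search-arxiv: {exc.message}") from exc
    return arguments


handle_call_tool could call validate_arguments(arguments) before building the arxiv.Search query so that malformed requests fail fast with a meaningful message.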

Testing Your MCP Server

There are two main ways to test your MCP server:

1. Using MCP Inspector

For development and debugging, the MCP Inspector provides a great interface to test your server:

Shell
 
npx @modelcontextprotocol/inspector uv --directory /your/project/path run your-server-name


The Inspector will display a URL that you can access in your browser to begin debugging.

2. Integration With Claude Desktop

To test your MCP server with Claude Desktop:

  1. Locate your Claude Desktop configuration file:
    • MacOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%/Claude/claude_desktop_config.json
  2. Add your MCP server configuration:
JSON
 
{
  "mcpServers": {
    "mcp-scholarly": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/your/mcp-scholarly",
        "run",
        "mcp-scholarly"
      ]
    }
  }
}


For published servers, you can use a simpler configuration:

JSON
 
{
  "mcpServers": {
    "mcp-scholarly": {
      "command": "uvx",
      "args": [
        "mcp-scholarly"
      ]
    }
  }
}


  3. Start Claude Desktop — you should now see your tool (e.g., "search-arxiv") available in the tools list.


Testing checklist:

  • Verify tool registration and discovery
  • Test input validation
  • Check error handling
  • Validate response formatting
  • Ensure proper resource cleanup
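
The first two checklist items can be covered with plain unit tests, assuming the SDK decorators return the original coroutine functions so they can be called directly. The sketch below also assumes pytest with the pytest-asyncio plugin, and that the example server module is importable as my_server.server; adjust the import to your package layout.

Python
 
import pytest

# The import path is an assumption; adjust it to match your package layout.
from my_server.server import handle_call_tool, handle_list_tools


@pytest.mark.asyncio
async def test_search_arxiv_is_registered():
    tools = await handle_list_tools()
    assert [tool.name for tool in tools] == ["search-arxiv"]


@pytest.mark.asyncio
async def test_missing_keyword_is_rejected():
    with pytest.raises(ValueError):
        await handle_call_tool("search-arxiv", {})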

Integration With AI Models

Once your MCP server is ready, it can be integrated with AI models that support the Model Context Protocol. The integration enables AI models to:

  • Discover available tools through the list_tools endpoint
  • Call specific tools with appropriate parameters
  • Process the responses and incorporate them into their interactions

For example, when integrated with Claude Desktop, your MCP tools appear in the "Available MCP Tools" list, making them directly accessible during conversations. The AI can then use these tools to enhance its capabilities — in our arXiv example, Claude can search and reference academic papers in real time during discussions.
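
To see the discover-and-call flow outside of Claude Desktop, here is a minimal sketch of a Python MCP client driving the server over stdio. It uses the client API from the official mcp package (StdioServerParameters, stdio_client, and ClientSession) and launches the server with the same uv command shown in the Claude Desktop configuration; the project path and the search keyword are placeholders.

Python
 
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    # Launch the server the same way Claude Desktop would (command + args).
    params = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/your/mcp-scholarly", "run", "mcp-scholarly"],
    )
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover available tools through list_tools.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call the search-arxiv tool with a keyword.
            result = await session.call_tool(
                "search-arxiv", {"keyword": "model context protocol"}
            )
            for content in result.content:
                if content.type == "text":
                    print(content.text)


asyncio.run(main())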

Common Challenges and Solutions

  1. Async operations. Ensure proper handling of asynchronous operations to prevent blocking (see the sketch after this list).
  2. Resource limits. Implement appropriate timeouts and resource limits.
  3. Error recovery. Design robust error recovery mechanisms.
  4. State management. Handle server state carefully in concurrent operations.
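
As an example of the first two points, the arxiv call in the tool handler is synchronous and blocks the event loop while it runs. The sketch below, which assumes Python 3.9+ for asyncio.to_thread, moves that call to a worker thread and bounds it with a timeout; the 15-second limit is an arbitrary example value.

Python
 
import asyncio

import arxiv

client = arxiv.Client()


async def search_arxiv_with_limits(keyword: str, timeout_seconds: float = 15.0) -> list:
    """Run the blocking arXiv search in a worker thread and enforce a timeout."""
    search = arxiv.Search(
        query=keyword,
        max_results=10,
        sort_by=arxiv.SortCriterion.SubmittedDate,
    )

    def fetch() -> list:
        # client.results() is a synchronous generator; materialize it off the event loop.
        return list(client.results(search))

    try:
        return await asyncio.wait_for(asyncio.to_thread(fetch), timeout=timeout_seconds)
    except asyncio.TimeoutError:
        raise ValueError(
            f"arXiv search for '{keyword}' timed out after {timeout_seconds} seconds"
        )


handle_call_tool could then await search_arxiv_with_limits(keyword) instead of calling client.results directly.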

Conclusion

Building an MCP server opens up new possibilities for extending AI capabilities. By following this guide and best practices, you can create robust tools that integrate seamlessly with AI models. The example arXiv search implementation demonstrates how to create practical, useful tools that enhance AI functionality.

Whether you're building research tools, data processing services, or other AI-enhanced capabilities, the Model Context Protocol provides a standardized way to extend AI model functionality. Start building your own MCP server today and contribute to the growing ecosystem of AI tools and services.

My official MCP Scholarly server has been accepted as a community server in the MCP repository. You can find it listed under the community servers section of the repository.

Resources

  • Model Context Protocol Documentation
  • MCP Official Repository
  • MCP Python SDK
  • MCP Python Server Creator
  • MCP Server Examples
  • arXiv API Documentation
  • Example arXiv Search MCP Server

For a deeper understanding of MCP and its capabilities, you can explore the official MCP documentation, which provides comprehensive information about the protocol specification and implementation details.
