Clawist
📖 Guide · 8 min read · By Lin

MCP Tutorial: Build AI Apps with Model Context Protocol in 2026

Model Context Protocol (MCP) is the hottest new way to build AI applications in 2026. Designed by Anthropic, MCP provides a standardized way to give large language models access to your data and tools—think of it as a USB-C port for AI applications.

The concept is simple: instead of hardcoding integrations for each AI model, MCP creates a universal protocol that any AI client can use. Claude Desktop, Cursor, Windsurf, and even the OpenAI Agents SDK now support MCP.

This guide walks you through building your own MCP server from scratch, connecting it to Claude, and understanding when MCP makes sense for your projects.

What is Model Context Protocol?

[Image: MCP architecture diagram. MCP creates a standardized bridge between AI models and your data.]

MCP solves a fundamental problem: AI models are powerful, but they're limited to their training data. Your business data, databases, APIs, and files aren't accessible unless you explicitly provide them.

Before MCP:

  • Copy-paste data into chat interfaces
  • Build custom integrations for each AI tool
  • No standardized way to give AI access to your systems

With MCP:

  • Universal protocol any AI client can use
  • Standardized resources and tools
  • Plug-and-play connections between AI and data

The architecture:

  • Client - AI interface like Claude Desktop, Cursor, or OpenClaw
  • Server - Your code that defines resources and tools
  • Transport - How they communicate (stdio, HTTP, SSE)
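
Concretely, the client and server exchange JSON-RPC 2.0 messages over whichever transport you choose. A tool invocation round trip looks roughly like this (the tool name and argument values here are illustrative):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "create-user",
              "arguments": { "name": "Ada", "email": "ada@example.com", "role": "admin" } } }
```

And the server replies with a result message:

```json
{ "jsonrpc": "2.0", "id": 1,
  "result": { "content": [{ "type": "text", "text": "Created user 42" }] } }
```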

If you've used REST APIs, you'll find MCP intuitive. Resources are like GET requests (fetch data), and tools are like POST requests (perform actions).
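
To make that analogy concrete, here is a toy sketch in plain TypeScript (not the real SDK, just the two interaction styles side by side):

```typescript
// Toy model of the two MCP primitives (not the real SDK API):
// resources are read-only lookups keyed by URI; tools take a payload and act.
type Resource = () => string;
type Tool = (input: Record<string, unknown>) => string;

const resources: Record<string, Resource> = {
  'config://greeting': () => JSON.stringify({ message: 'hello' }),
};

const tools: Record<string, Tool> = {
  shout: (input) => String(input.text).toUpperCase(),
};

// Like GET: fetch data, no side effects.
function readResource(uri: string): string {
  return resources[uri]();
}

// Like POST: send a payload, perform an action.
function callTool(name: string, input: Record<string, unknown>): string {
  return tools[name](input);
}

console.log(readResource('config://greeting')); // {"message":"hello"}
console.log(callTool('shout', { text: 'mcp' })); // MCP
```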

Why MCP Matters for Developers

[Image: MCP development workflow. MCP enables powerful AI integrations with minimal code.]

MCP is already being used for serious applications:

  • Automated trading - AI agents managing stock and crypto portfolios
  • Web scraping - Industrial-scale data extraction with AI decision-making
  • Cloud infrastructure - Managing Kubernetes clusters via natural language
  • Database operations - AI-powered data analysis and modifications
  • File management - Organizing and processing documents automatically

The protocol's power comes from standardization. Build one MCP server, and it works with any MCP-compatible client. As more AI tools adopt MCP, your integrations become more valuable.

For OpenClaw users, MCP support means you can extend your AI assistant with custom data sources and actions without modifying OpenClaw's core code.

Step 1: Set Up Your MCP Server Project

[Image: MCP server project setup. Initialize a TypeScript project with the MCP SDK.]

Official MCP SDKs are available for TypeScript, Python, Java, Kotlin, and C#. We'll use TypeScript for this tutorial.

Create a new project:

mkdir my-mcp-server
cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod

Create the server file:

// main.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'my-mcp-server',
  version: '1.0.0'
});

// We'll add resources and tools here

const transport = new StdioServerTransport();
await server.connect(transport);

The McpServer class handles all the protocol details; you just define which data and actions to expose, then connect a transport.

Step 2: Define Resources for Data Access

[Image: Database resources for AI access. Resources provide read-only data access for AI agents.]

Resources are how you give the AI access to your data. They're read-only: the AI can view them but not modify them.

Example: Database query resource:

import postgres from 'postgres';

const sql = postgres(process.env.DATABASE_URL!);

server.resource(
  'active-users',
  'postgres://users/active',
  async (uri) => {
    const users = await sql`
      SELECT id, name, email, last_login
      FROM users
      WHERE last_login > NOW() - INTERVAL '7 days'
    `;
    return {
      contents: [{
        uri: uri.href,
        mimeType: 'application/json',
        text: JSON.stringify(users)
      }]
    };
  }
);

Example: File system resource:

import fs from 'node:fs/promises';

server.resource(
  'config-file',
  'file:///app/config.json',
  async (uri) => {
    const content = await fs.readFile('/app/config.json', 'utf8');
    return {
      contents: [{
        uri: uri.href,
        mimeType: 'application/json',
        text: content
      }]
    };
  }
);

When you attach a resource in Claude Desktop, it fetches this data and includes it in the conversation context. The AI can then answer questions about your specific data.

Step 3: Define Tools for Actions

[Image: API tools for AI actions. Tools enable AI to perform actions and modify data.]

Tools are more powerful than resources: they can modify data, call APIs, and perform computations. Define each tool's inputs with a Zod schema; the SDK validates every call against it before your handler runs, so the AI can't send garbage.

Example: Database write tool:

server.tool(
  'create-user',
  'Creates a new user in the database',
  {
    name: z.string().describe('Full name of the user'),
    email: z.string().email().describe('Valid email address'),
    role: z.enum(['admin', 'user', 'viewer']).describe('User role')
  },
  async ({ name, email, role }) => {
    const result = await sql`
      INSERT INTO users (name, email, role)
      VALUES (${name}, ${email}, ${role})
      RETURNING id
    `;

    return {
      content: [{ type: 'text', text: `Created user ${result[0].id}` }]
    };
  }
);

Example: API call tool:

server.tool(
  'send-notification',
  'Sends a notification to a user',
  {
    userId: z.string().describe('User ID to notify'),
    message: z.string().describe('Notification message'),
    channel: z.enum(['email', 'sms', 'push']).describe('Delivery channel')
  },
  async (input) => {
    const response = await fetch('https://api.example.com/notify', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(input)
    });

    return {
      content: [{ type: 'text', text: response.ok ? 'Notification sent' : 'Notification failed' }]
    };
  }
);

Key insight: The Zod schema descriptions are crucial. They tell the AI what each field means, making it much more likely to provide correct values.
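
For reference, the Zod shape above compiles to roughly this JSON Schema, which is what an MCP client hands the model when listing tools (a hand-written approximation, not actual SDK output):

```typescript
// Approximate JSON Schema derived from the create-user Zod shape.
// The description strings are what the model actually sees when deciding
// how to fill in tool arguments.
const createUserJsonSchema = {
  type: 'object',
  properties: {
    name: { type: 'string', description: 'Full name of the user' },
    email: { type: 'string', format: 'email', description: 'Valid email address' },
    role: { enum: ['admin', 'user', 'viewer'], description: 'User role' },
  },
  required: ['name', 'email', 'role'],
};

console.log(createUserJsonSchema.properties.role.enum.join(','));
// admin,user,viewer
```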

Step 4: Connect to Claude Desktop

[Image: Claude Desktop MCP configuration. Configure Claude Desktop to use your MCP server.]

Once your server is ready, connect it to Claude Desktop:

  1. Open Claude Desktop settings
  2. Go to the Developer tab
  3. Edit the configuration file to add your server

Configuration file location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your server:

{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["ts-node", "/path/to/my-mcp-server/main.ts"],
      "env": {
        "DATABASE_URL": "postgres://..."
      }
    }
  }
}

Restart Claude Desktop, and your server should appear in the MCP panel. You can now:

  • Attach resources to conversations
  • Ask Claude to use your tools
  • Give Claude access to your real data

Step 5: Deploy to Production

[Image: Cloud deployment for MCP servers. Deploy your MCP server for remote access.]

For local development, the stdio transport works fine. For production or remote access, switch to the Streamable HTTP transport, which supersedes the older Server-Sent Events (SSE) transport.

HTTP transport setup:

import express from 'express'; // npm install express
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  // Stateless mode: a fresh transport per request, no session tracking
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3001, () => {
  console.log('MCP server running on port 3001');
});

Docker deployment:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npx", "ts-node", "main.ts"]

Deploy to any container platform—your MCP server is now accessible from anywhere.

MCP vs REST APIs

[Image: API comparison diagram. When to use MCP vs traditional REST APIs.]

MCP isn't replacing REST APIs—it's an additional layer for AI interactions.

Use MCP when:

  • Building AI-powered features
  • Want plug-and-play AI integration
  • Need the AI to understand context
  • Working with MCP-compatible clients

Use REST when:

  • Building traditional web/mobile apps
  • Need fine-grained control over requests
  • Working with non-AI clients
  • Performance is critical

Many MCP servers actually wrap existing REST APIs, creating an "API for your API" that makes the data AI-accessible.
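
As a sketch of that wrapping pattern, the only MCP-specific work is often reshaping the REST response into a tool result. Here is a hypothetical helper (the content-array shape is the protocol's standard tool result format; the helper itself is an assumption, not part of any SDK):

```typescript
// Hypothetical helper: convert a REST response into an MCP-style tool result.
// `isError: true` lets the model see the failure and react to it instead of
// the call silently disappearing.
function toToolResult(
  ok: boolean,
  payload: unknown
): { content: { type: string; text: string }[]; isError?: boolean } {
  if (!ok) {
    return { content: [{ type: 'text', text: 'Upstream request failed' }], isError: true };
  }
  return { content: [{ type: 'text', text: JSON.stringify(payload) }] };
}

console.log(toToolResult(true, { id: 7 }).content[0].text); // {"id":7}
```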

Conclusion

Model Context Protocol represents a shift in how we build AI applications. Instead of each tool having its own integration method, MCP provides a universal standard that any AI client can use.

For developers, this means:

  • Build once, use everywhere
  • Standardized data access patterns
  • Clear separation of resources and tools
  • Growing ecosystem of compatible clients

Get started today: install the SDK, define one resource and one tool, and connect the server to Claude Desktop. The AI coding revolution is happening now, and MCP is how you'll build the tools that power it.

FAQ

Does MCP only work with Claude?

No. While Anthropic created MCP, it's an open standard. Cursor, Windsurf, OpenAI Agents SDK, and other tools support it. OpenClaw also supports MCP servers for extended functionality.

Is MCP secure for production use?

MCP itself is just a protocol—security depends on your implementation. Use proper authentication, validate all inputs with Zod, and deploy behind appropriate network controls.
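
For example, a minimal bearer-token gate in front of an HTTP transport might look like this (a sketch of header parsing only; in practice load the token from a secret store and compare in constant time):

```typescript
// Minimal sketch: reject requests that don't carry the expected bearer token.
// The header format ("Bearer <token>") is standard; the token value is yours.
function isAuthorized(authHeader: string | undefined, expectedToken: string): boolean {
  if (!authHeader || !authHeader.startsWith('Bearer ')) return false;
  return authHeader.slice('Bearer '.length) === expectedToken;
}

console.log(isAuthorized('Bearer s3cret', 's3cret')); // true
console.log(isAuthorized(undefined, 's3cret'));       // false
```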

Can I use MCP with local models like Ollama?

Today, the most polished MCP clients are commercial products, but the protocol is open: you can build your own MCP client that connects to Ollama, or wait for the open-source ecosystem to mature.

How does MCP compare to function calling?

MCP is higher-level than function calling. It provides a standardized protocol for multiple clients, while function calling is specific to each AI provider's API.

What's the performance overhead?

Minimal for most use cases. The protocol is lightweight, and the overhead is typically dwarfed by actual AI inference time.