
Tech Exploration: Testing the Model Context Protocol for AI Integrations

Welcome to Tech Exploration, where Ketryon tests cutting-edge tools to power modern solutions. In this edition, we dive into the Model Context Protocol (MCP).

Image credit: Photo by @theshubhamdhage on Unsplash

AI · Machine Learning · Integration · Protocol
By Kenny Tran · Published on 4/1/2025 · Last updated on 4/5/2025

Introduction

At Ketryon, we're always on the lookout for technologies that streamline development and deliver value to businesses. The Model Context Protocol (MCP), launched by Anthropic in November 2024, caught our attention for its promise to standardize how AI models connect to external data and tools. As developers, we're excited to explore MCP's potential to enhance AI-driven applications. In this Tech Exploration, we test MCP by building a Todo app with natural language support, sharing what we learned and how it can empower global and Swedish businesses. Let's dive into why MCP is a game-changer for AI integrations!

What Is the Model Context Protocol?

MCP is an open protocol that standardizes how AI applications, like Claude or custom agents, connect to external data sources (e.g., databases, GitHub) and tools (e.g., Slack, Notion). Think of it as a USB-C port for AI: just as USB-C unifies device connections, MCP provides a single interface for AI to access diverse systems, replacing one-off custom integrations.

Unlike traditional APIs, where each service requires unique code, MCP uses a client-server architecture:

  • MCP Host: The AI app (e.g., Claude Desktop) initiating requests.
  • MCP Client: Manages secure connections to servers within the host.
  • MCP Server: Exposes tools, resources, or prompts (e.g., a GitHub server fetching issues).

MCP leverages JSON-RPC 2.0 for communication, enabling dynamic tool discovery and real-time context updates. It’s designed for flexibility, security, and scalability, making it ideal for complex AI workflows.
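
To make this concrete, here is roughly what a tool invocation looks like on the wire. The method name (tools/call) and the name/arguments shape come from the MCP spec; the specific tool and values below are illustrative and anticipate the add_task tool we build later in this post:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add_task",
    "arguments": { "text": "Buy groceries", "dueDate": "tomorrow" }
  }
}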

Why It's Relevant

MCP addresses a key AI challenge: isolation from real-time data. LLMs often rely on static training data, requiring manual inputs for current information. MCP breaks these silos, enabling AI to fetch live data—think Slack messages or GitHub PRs—seamlessly.

  1. For Businesses: MCP streamlines workflows. For example, sales teams can use AI to auto-log CRM notes (e.g., Salesforce) or draft emails based on Slack discussions, saving up to 72% of admin time. In Sweden, where GDPR compliance is critical, MCP’s granular permissions ensure secure data access.
  2. For Developers: MCP reduces integration complexity. Instead of writing custom API wrappers, developers build or reuse pre-built MCP servers that work across LLMs. Over 1,000 community-built servers exist, from Google Drive to Postgres (see the example config just after this list).
  3. Industry Trend: Launched in November 2024, MCP's GitHub stars surged in early 2025, with adopters like Block and IntelliJ IDEA integrating it. Upwork jobs increasingly mention MCP for AI-driven IDEs or workflows, signaling its rise.
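
To make item 2 concrete: pointing an MCP host at a pre-built server is mostly a matter of configuration. For example, Claude Desktop can launch the reference filesystem server with an entry like the following in claude_desktop_config.json (the directory path is a placeholder you would replace with your own):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    }
  }
}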

Our Test Drive

To explore MCP, we built a TypeScript-based Todo app that lets Claude Desktop manage tasks using natural language prompts, such as “Add a task to buy groceries tomorrow.” We leveraged Node.js and the MCP TypeScript SDK to create an MCP server.

Setup

We followed Anthropic's docs when setting up the Node.js project with TypeScript:

npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript ts-node @types/node
npx tsc --init
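
A quick configuration note: the SDK exposes its server modules as package subpaths (e.g., @modelcontextprotocol/sdk/server/mcp.js), which assumes a TypeScript module-resolution mode that understands package exports. The sketch below is one plausible tsconfig.json; the values are a suggestion rather than the only working setup, and outDir is set to dist to match the run commands later in this post:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["todo-server.ts"]
}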

We created a Todo app server (todo-server.ts) that registers two tools with the SDK's McpServer class and stores tasks in an in-memory array (simulating a database for simplicity):

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

interface Task {
  id: number;
  text: string;
  dueDate?: string;
}

// In-memory task store (stands in for a real database)
const tasks: Task[] = [];

const server = new McpServer({ name: 'todo-server', version: '1.0.0' });

// Tool: add a task with an optional due date
server.tool(
  'add_task',
  { text: z.string(), dueDate: z.string().optional() },
  async ({ text, dueDate }) => {
    const task: Task = { id: tasks.length + 1, text, dueDate };
    tasks.push(task);
    return { content: [{ type: 'text', text: `Added task #${task.id}: ${task.text}` }] };
  }
);

// Tool: list all tasks as JSON text
server.tool('list_tasks', async () => {
  return { content: [{ type: 'text', text: JSON.stringify(tasks, null, 2) }] };
});

async function main() {
  // stdout carries the protocol itself, so the server talks to its host over stdio
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);

We compiled and ran the server:

npx tsc
node dist/todo-server.js
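
Before wiring the server into Claude, it is worth exercising it on its own. The MCP project ships an Inspector that can spawn a stdio server and let you call its tools from a browser UI; one way to run it against our compiled build (assuming npx is available) is:

npx @modelcontextprotocol/inspector node dist/todo-server.js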

Connecting to Claude

We configured Claude Desktop's claude_desktop_config.json to use our server, pointing it at the compiled file with an absolute path (the MCP quickstart recommends absolute paths here):

{
  "mcpServers": {
    "todo": {
      "command": "node",
      "args": ["dist/todo-server.js"]
    }
  }
}

After restarting Claude, we prompted: “Add a task to buy groceries tomorrow.” Claude sent the request via MCP, and our server responded, adding the task. We then asked, “List all tasks,” and Claude returned:

Tasks:
1. Buy groceries (Due: Tomorrow)
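
Under the hood, Claude's friendly reply is assembled from plain tool results. Pairing with the tools/call request sketched earlier, the JSON-RPC response our server would send back for add_task looks roughly like this (simplified, values illustrative):

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "Added task #1: Buy groceries" }
    ]
  }
}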

Learnings

MCP's tool discovery was seamless—Claude recognized our add_task and list_tasks tools instantly. The TypeScript SDK's type safety helped us catch parameter errors early, reinforcing our love for typed JavaScript. The server ran lightweight and fast, and integrating with Claude felt intuitive, hinting at MCP's potential for real-world AI apps.

References

  1. https://modelcontextprotocol.io/introduction
  2. https://www.anthropic.com/news/model-context-protocol
  3. https://github.com/modelcontextprotocol/typescript-sdk