A server implementation of the Model Context Protocol (MCP) for Spanish language learning, providing tiered access to conversation practice, exercises, and language learning resources.
This project has been transformed from a client-side implementation into a full-fledged server, allowing multiple clients to access Spanish learning functionality without directly handling API keys or complex context generation. The server provides a RESTful API with tiered access controls, memory management, and comprehensive error handling.
Get the Spanish Learning MCP Server up and running in minutes with these quick start steps:
- Node.js 16+ and npm
- Anthropic API key (for Claude AI)
- Git
# Clone the repository
git clone https://github.com/yourusername/spanish-learning-mcp-server.git
cd spanish-learning-mcp-server
# Install dependencies
npm install
# Create .env file
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env
# Start the server in development mode
npm run dev
# Or build and start in production mode
npm run build
npm start
The server will be available at http://localhost:3000 with the health check endpoint at http://localhost:3000/health.
Make a simple request to verify your server is working:
curl http://localhost:3000/health
# Should return {"status":"ok", "uptime": "..."}
# Create an API key (requires admin access)
curl -X POST http://localhost:3000/api/keys \
-H "Content-Type: application/json" \
-H "x-admin-key: your_admin_key" \
-d '{"userId": "test-user", "tier": "free"}'
# Make a query with context
curl -X POST http://localhost:3000/api/mcp/query \
-H "Content-Type: application/json" \
-H "x-api-key: your_api_key" \
-d '{"query": "How do I say hello in Spanish?", "contextType": "vocabulary"}'
The server includes comprehensive test coverage:
# Run all tests
npm test
# Run specific test suites
npm test -- --testPathPattern=auth
npm test -- --testPathPattern=conversation
# Run tests with coverage report
npm test -- --coverage
# Build and run with Docker
docker build -t spanish-learning-mcp-server .
docker run -p 3000:3000 -e ANTHROPIC_API_KEY=your_api_key spanish-learning-mcp-server
Deploy to Vercel:
npm install -g vercel
vercel
Deploy to Heroku:
npm install -g heroku
heroku create
git push heroku main
Run with PM2 (production process manager):
npm install -g pm2
pm2 start npm --name "spanish-mcp" -- start
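Instead of passing npm through the CLI, PM2 can be driven from an ecosystem file. The following is a minimal sketch; the app name, script path, and instance settings are assumptions, not part of this repository:

```javascript
// ecosystem.config.js — minimal PM2 config sketch (names/paths are assumptions)
module.exports = {
  apps: [
    {
      name: "spanish-mcp",
      script: "dist/server.js", // assumes the compiled entry point from `npm run build`
      instances: "max",         // cluster across available CPU cores
      exec_mode: "cluster",
      env: {
        NODE_ENV: "production",
        PORT: 3000,
      },
    },
  ],
};
```

Start it with `pm2 start ecosystem.config.js`.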
The server follows a layered architecture with middleware for cross-cutting concerns:
graph TD
Client[Client Applications] --> RateLimit[Rate Limiting]
RateLimit --> Auth[Authentication]
Auth --> Router[Express Router]
subgraph "API Layer"
Router --> Health[Health Endpoints]
Router --> Context[Context Endpoints]
Router --> Conversation[Conversation Endpoints]
Router --> Exercise[Exercise Endpoints]
Router --> MCP[MCP Query Endpoints]
end
subgraph "Service Layer"
Context --> MCPService[MCP Service]
Conversation --> MCPService
Exercise --> MCPService
MCP --> MCPService
MCPService --> ContextManager[Context Manager]
MCPService --> AnthropicPool[Anthropic Connection Pool]
end
subgraph "Data & External Services"
ContextManager --> Cache[Memory Cache]
ContextManager --> DataStore[In-Memory Data Store]
AnthropicPool --> Claude[Anthropic Claude API]
end
subgraph "Cross-Cutting Concerns"
Logger[Logging Service]
ErrorHandler[Error Handler]
Cleanup[Resource Cleanup]
end
ErrorHandler -.-> Router
Logger -.-> All[All Components]
Cleanup -.-> DataStore
Cleanup -.-> Cache
Cleanup -.-> AnthropicPool
Key architectural components:
- RESTful API: Provides endpoints for MCP queries, context retrieval, conversations, and exercises
- Authentication: Secure API key-based authentication with tier-based access control
This tier structure ensures that users can start with the free tier to explore basic functionality, upgrade to the basic tier for more comprehensive learning features, and access the full suite of advanced capabilities with the premium tier. The system is designed to provide value at every tier while offering clear benefits for upgrading.
- Start Conversations: Initialize conversations on various topics with tier-specific limitations
- Continue Conversations: Add messages to existing conversations with context history
- Conversation History: View and manage past conversations
- Delete Conversations: Remove conversations that are no longer needed
- Automated Cleanup: Periodic cleanup of old conversations to prevent memory leaks
- Generate Exercises: Create customized exercises based on difficulty level and topic
- Check Exercise Answers: Submit answers and receive feedback
- Exercise Types: Various exercise types including vocabulary matching, multiple choice, fill-in-the-blank, etc.
- Detailed Feedback: Premium users receive more detailed feedback on their exercises
- Memory Management: Efficient memory usage with automatic cleanup
- Connection Pooling: Optimized API client pooling for concurrent requests
- Caching: Context and response caching to reduce API calls
- Graceful Shutdown: Proper resource cleanup during server shutdown
- Unit Tests: Comprehensive test coverage for all major functionality
- Integration Tests: End-to-end testing of API endpoints
- Error Handling Tests: Validation of error scenarios
- Memory Leak Tests: Verification of memory cleanup functionality
- Tier-Based Access Tests: Validation of tier-specific limitations
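To illustrate what the tier-based access tests verify, here is a hypothetical tier-gate helper (the tier names match the API-key examples elsewhere in this README; the helper itself is a sketch, not the project's code):

```javascript
// Hypothetical tier-gate helper: a user's tier must rank at or above the
// endpoint's required tier.
const TIER_RANK = { free: 0, basic: 1, premium: 2 };

function hasRequiredTier(userTier, requiredTier) {
  return TIER_RANK[userTier] >= TIER_RANK[requiredTier];
}

console.log(hasRequiredTier("free", "free"));     // true
console.log(hasRequiredTier("free", "premium"));  // false
console.log(hasRequiredTier("premium", "basic")); // true
```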
- JSDoc Comments: Complete documentation for all endpoints and functions
- Type Definitions: TypeScript type definitions for improved code safety
- Code Comments: Clear comments explaining complex logic
- API Documentation: Detailed endpoint descriptions with parameter and response information
The following sequence diagram shows the flow of a typical MCP request through the system:
sequenceDiagram
participant Client
participant Auth as Authentication
participant Rate as Rate Limiter
participant Router
participant MCP as MCP Service
participant Context as Context Manager
participant Cache
participant Pool as Connection Pool
participant Claude as Anthropic Claude API
participant Cleanup as Resource Cleanup
Client->>Auth: Request with API Key
Auth->>Rate: Check Rate Limits
Rate->>Router: Route Request
Router->>MCP: Process Request
Note over MCP: Determine Context Type
MCP->>Context: Request Context
Context->>Cache: Check Cache
alt Cache Hit
Cache-->>Context: Return Cached Context
else Cache Miss
Context->>Context: Generate Context
Context->>Cache: Store in Cache
end
Context-->>MCP: Return Context
MCP->>Pool: Get Available Connection
Pool->>Claude: Send Query with Context
Claude-->>Pool: Return Response
Pool-->>MCP: Return Response
MCP->>MCP: Format Response
MCP-->>Router: Return Formatted Response
Router-->>Client: Send Response
Note over Cleanup: Periodic Cleanup (Async)
Cleanup->>Cache: Clean Stale Items
Cleanup->>Pool: Return Connections
The following flowchart shows the authentication and tier-based authorization flow:
flowchart TD
A[Client Request] --> B{Has API Key?}
B -- No --> C[Return 401 Unauthorized]
B -- Yes --> D{Validate API Key}
D -- Invalid --> E[Return 403 Forbidden]
D -- Valid --> F{Check User Tier}
F --> G[Add User Info to Request]
G --> H{Access Restricted Resource?}
H -- No --> I[Process Request]
H -- Yes --> J{User Has Required Tier?}
J -- No --> K[Return 403 Forbidden]
J -- Yes --> I
I --> L[Return Response]
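The flow above can be condensed into a dependency-free sketch. The key store and tier names here are assumptions for illustration; the real server validates keys against its own store:

```javascript
// Dependency-free sketch of the auth flow in the flowchart above.
const TIER_RANK = { free: 0, basic: 1, premium: 2 };
const validKeys = new Map([["key-free", { userId: "u1", tier: "free" }]]);

function authorize(req, requiredTier) {
  const key = req.headers["x-api-key"];
  if (!key) return { status: 401, error: "Missing API key" };   // no key -> 401
  const user = validKeys.get(key);
  if (!user) return { status: 403, error: "Invalid API key" };  // bad key -> 403
  if (requiredTier && TIER_RANK[user.tier] < TIER_RANK[requiredTier]) {
    return { status: 403, error: "Tier upgrade required" };     // wrong tier -> 403
  }
  return { status: 200, user };                                 // process request
}

console.log(authorize({ headers: {} }, "free").status);                     // 401
console.log(authorize({ headers: { "x-api-key": "bad" } }, "free").status); // 403
console.log(authorize({ headers: { "x-api-key": "key-free" } }).status);    // 200
```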
The server is built using:
- Express.js: Web framework for handling API requests
- TypeScript: Strongly typed language for improved code quality
- Anthropic Claude API: AI model for generating responses
- Jest: Testing framework for validating functionality
- Pino: Structured logging for debugging and monitoring
# Install dependencies
npm install
# Set environment variables
export ANTHROPIC_API_KEY=your_api_key
export PORT=3000
# Start the server
npm start
The server can be containerized using Docker for consistent deployment across environments:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/server.js"]
Build and run the Docker image:
docker build -t spanish-learning-mcp-server .
docker run -p 3000:3000 -e ANTHROPIC_API_KEY=your_api_key spanish-learning-mcp-server
The server can be deployed to various cloud platforms:
AWS Elastic Beanstalk:
- Upload the application as a ZIP file
- Configure environment variables in the Elastic Beanstalk console
- Enable auto-scaling for handling varying loads
Google Cloud Run:
- Build and publish the Docker image to Container Registry
- Deploy to Cloud Run with memory settings optimized for LLM operations (min 1GB recommended)
- Configure concurrency based on expected traffic patterns
Azure App Service:
- Use the Node.js App Service plan
- Configure environment variables in the Application Settings
- Set up auto-scaling rules based on CPU/memory usage
For lower traffic scenarios, serverless deployment options can be cost-effective:
AWS Lambda with API Gateway:
- Package the application using AWS SAM or Serverless Framework
- Configure timeout settings appropriately (LLM requests may need longer timeouts)
- Implement connection pooling carefully due to Lambda's execution model
Google Cloud Functions:
- Deploy individual endpoints as separate functions
- Use Pub/Sub for background processing of resource-intensive operations
- Implement proper cleanup for cold starts/stops
The server implements structured logging using Pino, which should be integrated with your monitoring stack:
// Example of configuring Pino for production
const logger = pino({
level: process.env.LOG_LEVEL || "info",
transport:
process.env.NODE_ENV === "production"
? undefined
: { target: "pino-pretty" },
redact: ["req.headers.authorization", "req.headers.x-api-key"],
formatters: {
level: (label) => {
return { level: label };
},
},
});
For production environments, consider:
- Shipping logs to a centralized logging system (CloudWatch, Stackdriver, ELK Stack)
- Implementing log rotation for self-hosted environments
- Ensuring sensitive data like API keys are properly redacted
Monitor the following metrics:
- Request metrics:
  - Request rate per endpoint and tier
  - Response times (p50, p95, p99)
  - Error rates by endpoint and error type
- Resource metrics:
  - Memory usage (particularly important for large context operations)
  - CPU utilization
  - Connection pool utilization
- Business metrics:
  - Active users by tier
  - Conversation and exercise creation rates
  - API key usage patterns
Recommended tools include:
- Prometheus + Grafana for self-hosted monitoring
- DataDog, New Relic, or Dynatrace for commercial monitoring
- Cloud-native monitoring services (CloudWatch, Stackdriver) for cloud deployments
The server provides a /health endpoint that should be used for:
- Load balancer health checks
- Container orchestration health probes
- Alerting on service degradation
Consider extending the health check to include:
- Database connectivity (when implemented)
- Anthropic API availability
- Resource availability checks
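An extended health payload might look like the following sketch. The probe functions are hypothetical stand-ins for real checks (database ping, Anthropic API reachability):

```javascript
// Sketch of an extended health check: run each probe, aggregate to one status.
async function extendedHealth(probes) {
  const entries = await Promise.all(
    Object.entries(probes).map(async ([name, probe]) => {
      try {
        return [name, (await probe()) ? "ok" : "degraded"];
      } catch {
        return [name, "down"]; // a throwing probe marks the dependency down
      }
    })
  );
  const checks = Object.fromEntries(entries);
  const status = Object.values(checks).every((s) => s === "ok") ? "ok" : "degraded";
  return { status, uptime: process.uptime(), checks };
}

// Example with stub probes standing in for real connectivity checks:
extendedHealth({ database: async () => true, anthropic: async () => true })
  .then((h) => console.log(h.status)); // "ok"
```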
The server uses connection pooling for Anthropic API calls, which should be tuned based on your traffic patterns:
// Example tuning for high-traffic environments
const mcpConfig = new McpConfig({
apiKey: process.env.ANTHROPIC_API_KEY,
connectionPoolSize: 25, // Adjust based on expected concurrent requests
connectionPoolTimeout: 60000, // Increase for longer-running requests
cacheTTL: 1800000, // 30 minutes for production caching
enableCaching: true,
});
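To make the pool-size parameter concrete, here is a toy fixed-size pool (a sketch, not the server's implementation): at most N clients are checked out at once, and extra requests wait for a release.

```javascript
// Toy fixed-size pool illustrating what connectionPoolSize bounds.
class Pool {
  constructor(size, factory) {
    this.idle = Array.from({ length: size }, factory);
    this.waiters = [];
  }
  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();
    return new Promise((resolve) => this.waiters.push(resolve)); // wait for release
  }
  release(client) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(client); // hand directly to the next waiter
    else this.idle.push(client);
  }
}

const pool = new Pool(2, () => ({ inUse: false }));
pool.acquire().then((client) => {
  // ... send a request with this client ...
  pool.release(client);
});
```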
Optimize caching for your workload:
- MCP Context Caching:
  - Increase cache TTL for stable content
  - Implement cache warming for common queries
  - Consider distributed caching (Redis) for multi-instance deployments
- Response Caching:
  - Cache common queries at the API level
  - Implement ETag support for client-side caching
  - Use tiered caching strategies (memory → distributed → persistent)
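The in-memory layer of a tiered strategy can be sketched as a small TTL cache (illustrative only; the server's actual cache implementation may differ):

```javascript
// Minimal in-memory TTL cache: keys expire after ttlMs and then count as misses.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  get(key) {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) { // stale entry: evict and report a miss
      this.store.delete(key);
      return undefined;
    }
    return hit.value;
  }
  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

const cache = new TtlCache(30 * 60 * 1000); // 30-minute TTL, as configured above
cache.set("context:vocabulary", "# Spanish Vocabulary Reference ...");
console.log(cache.get("context:vocabulary") !== undefined); // true
```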
The server includes cleanup routines for memory management that should be tuned for your deployment:
// Example of tuning memory cleanup parameters
export const CONVERSATION_MAX_AGE_MS = 14 * 24 * 60 * 60 * 1000; // 14 days retention
const CLEANUP_INTERVAL_MS = 30 * 60 * 1000; // Run cleanup every 30 minutes
Consider:
- Shorter retention periods for high-volume deployments
- More frequent cleanup intervals during peak hours
- Implementing memory limits per user/tier
- Using separate processes for memory-intensive operations
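An age-based sweep matching the retention parameters above might look like this sketch (the conversation shape, `{ updatedAt }`, is an assumption for illustration):

```javascript
// Sketch of a periodic age-based conversation sweep.
const CONVERSATION_MAX_AGE_MS = 14 * 24 * 60 * 60 * 1000; // 14 days

function sweepConversations(conversations, now = Date.now()) {
  let removed = 0;
  for (const [id, convo] of conversations) {
    if (now - convo.updatedAt > CONVERSATION_MAX_AGE_MS) {
      conversations.delete(id); // safe: Map iteration tolerates deletion
      removed += 1;
    }
  }
  return removed;
}

const convos = new Map([
  ["old", { updatedAt: Date.now() - 15 * 24 * 60 * 60 * 1000 }],
  ["fresh", { updatedAt: Date.now() }],
]);
console.log(sweepConversations(convos)); // 1
console.log(convos.has("fresh"));        // true
```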
- Key Management:
  - Rotate API keys regularly
  - Store keys in a secure vault (AWS Secrets Manager, HashiCorp Vault)
  - Implement key revocation processes
- Key Usage Policies:
  - Restrict keys by IP address where possible
  - Implement key-specific rate limits
  - Log all key usage for audit purposes
- TLS Configuration:
  - Always use HTTPS in production
  - Configure secure TLS parameters (TLS 1.2+, strong ciphers)
  - Implement HSTS headers
- API Protection:
  - Consider using an API gateway for additional security
  - Implement IP-based access controls for admin endpoints
  - Use Web Application Firewall (WAF) rules to protect against common attacks
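Setting the HSTS header mentioned above takes a one-line middleware; this dependency-free sketch shows the idea (in practice a library such as helmet covers this and related headers):

```javascript
// Dependency-free Express-style middleware that sets the HSTS header.
function hsts(req, res, next) {
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
}

// Exercise with a stub response object:
const headers = {};
hsts({}, { setHeader: (k, v) => (headers[k] = v) }, () => {});
console.log(headers["Strict-Transport-Security"]); // max-age=31536000; includeSubDomains
```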
- Least Privilege:
  - Run containers/processes as non-root users
  - Use minimal base images (Alpine, distroless)
  - Implement proper network segmentation
- Security Scanning:
  - Scan dependencies for vulnerabilities (npm audit, Snyk)
  - Implement container image scanning
  - Perform regular penetration testing
- Data Protection:
  - Encrypt sensitive data at rest
  - Implement proper data retention policies
  - Ensure compliant handling of user data
- Enhanced Authentication:
  - Consider implementing OAuth2/OIDC for user authentication
  - Add multi-factor authentication for administrative access
  - Implement proper session management
- Enhanced Authorization:
  - Implement attribute-based access control (ABAC)
  - Audit access patterns regularly
  - Enforce principle of least privilege
- GET /health - Server health check
- POST /api/keys - Create API keys (admin only)
- GET /api/context - Retrieve language context
- POST /api/mcp/query - Query the MCP with context
- GET /api/conversation/topics - Get available conversation topics
- POST /api/conversation/start - Start a new conversation
- POST /api/conversation/continue - Continue an existing conversation
- GET /api/conversation/:id - Get conversation history
- GET /api/conversation/history - Get all user conversations
- DELETE /api/conversation/:id - Delete a conversation
- GET /api/exercise/types - Get available exercise types
- POST /api/exercise/generate - Generate exercises
- POST /api/exercise/check - Check exercise answers
- GET /api/exercise/history - Get exercise history
- Database integration for persistent storage
- User management and authentication
- Analytics and usage tracking
- Additional exercise types and conversation topics
- Mobile client applications
A comprehensive implementation of Model Context Protocol for Spanish language learning
Created by Danny Thompson (DThompsonDev) at This Dot Labs

Imagine this: You're trying to learn Spanish using an AI assistant. One day, you ask it how to say "hello," and it answers just fine. The next day, you ask a grammar question, and it gives you a complex explanation way above your level. It's like starting a conversation with someone who has no memory of who you are or what you know.
Now flip the script. Imagine that every time you ask a question, the AI remembers:
- You're a beginner.
- You've been learning greetings and verbs.
- You like examples with food and travel.
Suddenly, it feels like you've got a personal tutor who knows you and tailors every answer to your level and preferences.
That's what Model Context Protocol (MCP) is all about.
Let's break down what MCP actually does:
- Context Retrieval: Pulls the right data (like beginner-level verbs or grammar rules) based on the user's question.
- Context Formatting: Structures the data into a markdown format that AI models (like Claude) can easily parse and use.
- Context Injection: Adds that formatted data directly into the system prompt sent to the model.
- Response Handling: Processes what the AI gives back and returns a clean, accurate answer.
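Those four steps can be condensed into a tiny pipeline sketch. Here, retrieve and callModel are hypothetical stubs standing in for the context manager and the Claude client:

```javascript
// The four MCP steps as a tiny pipeline sketch with stubbed dependencies.
async function mcpQuery(question, { retrieve, callModel }) {
  const items = await retrieve(question);                    // 1. context retrieval
  const context = items                                      // 2. context formatting
    .map((i) => `### ${i.word}\n- **Translation:** ${i.translation}`)
    .join("\n\n");
  const systemPrompt =
    `You are a Spanish tutor. Reference material:\n\n${context}`; // 3. context injection
  const raw = await callModel(systemPrompt, question);
  return raw.trim();                                         // 4. response handling
}

mcpQuery("How do I say hello?", {
  retrieve: async () => [{ word: "hola", translation: "hello" }],
  callModel: async () => '  "Hola" means hello.  ',
}).then(console.log); // "Hola" means hello.
```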
- What is Model Context Protocol?
- Why MCP Matters
- Project Overview
- Getting Started
- Architecture
- How It Works
- Usage Examples
- Integrating with Other Projects
- Advanced Topics
- Troubleshooting
- Contributing
- License
Model Context Protocol (MCP) standardizes how Large Language Models (LLMs) interact with external tools and domain-specific data sources, enhancing their capabilities. It provides a structured way to:
- Retrieve relevant context from your application's data sources
- Format this context in a way that's optimal for LLM consumption
- Inject the context into prompts sent to the LLM
- Process and return responses in a consistent format
MCP acts as a bridge between your application's data and the LLM, ensuring that the model has access to the most relevant information when generating responses.
- Context Retrieval: Fetching relevant data based on user queries
- Context Formatting: Structuring data in a way that maximizes LLM understanding
- Context Injection: Adding the formatted context to prompts
- Response Processing: Handling and potentially post-processing LLM responses
Traditional LLM integration faces several challenges:
- Knowledge Cutoffs: LLMs only know what they were trained on
- Hallucinations: Models can generate plausible but incorrect information
- Lack of Domain Specificity: Generic responses that don't leverage your data
- Inconsistent Responses: Variations in output format and quality
MCP addresses these challenges by:
- Providing Up-to-Date Information: Using your application's current data
- Reducing Hallucinations: Grounding responses in factual context
- Enhancing Domain Specificity: Tailoring responses to your specific use case
- Ensuring Consistency: Standardizing how context is provided and responses are formatted
This project implements MCP for Spanish language learning, allowing Claude AI to provide accurate, contextual responses about Spanish vocabulary and grammar. It demonstrates:
- How to structure and organize an MCP implementation
- Techniques for context retrieval and formatting
- Methods for integrating with external data sources (Appwrite)
- Approaches for different types of context (vocabulary, grammar, mixed)
- ✅ Integration with Claude AI via the Anthropic API
- ✅ Context generation from vocabulary and grammar data
- ✅ Appwrite database integration for data storage and retrieval
- ✅ Multiple context types (vocabulary, grammar, mixed)
- ✅ Interactive terminal interface with colored Spanish text highlighting
- ✅ Consistent response formatting for better readability
- ✅ Modular design for easy integration with other projects
- ✅ TypeScript support for type safety and better developer experience
- Node.js 16.0.0 or later
- npm or yarn
- An Anthropic API key (for Claude AI)
- Optional: Appwrite account (for database integration)
- Optional: Docker for containerized deployment
- Clone the repository
  git clone https://github.com/yourusername/spanish-learning-mcp-server.git
  cd spanish-learning-mcp-server
- Install dependencies
  npm install
- Set up environment variables
  Create a .env file in the root directory with the following variables:
  # Anthropic API key for Claude (required)
  ANTHROPIC_API_KEY=your_anthropic_api_key
  # Server configuration
  PORT=3000
  NODE_ENV=development
  LOG_LEVEL=info
  # Admin API key for creating user API keys (use a secure random value)
  ADMIN_API_KEY=your_secure_admin_key
  # Appwrite configuration (optional if using the demo with hardcoded data)
  NEXT_PUBLIC_APPWRITE_ENDPOINT=https://cloud.appwrite.io/v1
  NEXT_PUBLIC_APPWRITE_PROJECT_ID=your_appwrite_project_id
  NEXT_PUBLIC_APPWRITE_DATABASE_ID=your_appwrite_database_id
  NEXT_PUBLIC_APPWRITE_VOCABULARY_COLLECTION_ID=your_vocabulary_collection_id
  NEXT_PUBLIC_APPWRITE_GRAMMAR_COLLECTION_ID=your_grammar_collection_id
- Start the server in development mode
  npm run dev
  The server will start on http://localhost:3000 with automatic reloading for development.
- Build and start in production mode
  npm run build
  npm start
  For production use, always build the project first and run the optimized version.
- Run the server with Docker
  # Build the Docker image
  docker build -t spanish-mcp-server .
  # Run the container
  docker run -p 3000:3000 -e ANTHROPIC_API_KEY=your_key spanish-mcp-server
The server uses API keys for authentication and tier management. To create API keys:
# Create a free tier API key
curl -X POST http://localhost:3000/api/keys \
-H "Content-Type: application/json" \
-H "x-admin-key: your_admin_key" \
-d '{"userId": "user1", "tier": "free", "name": "Test User"}'
# Create a premium tier API key
curl -X POST http://localhost:3000/api/keys \
-H "Content-Type: application/json" \
-H "x-admin-key: your_admin_key" \
-d '{"userId": "premium1", "tier": "premium", "name": "Premium User"}'
Once the server is running, you can test it with various API endpoints:
# Check server health
curl http://localhost:3000/health
# Test a conversation endpoint with your API key
curl -X GET http://localhost:3000/api/conversation/topics \
-H "x-api-key: your_api_key"
To connect a frontend application to the server:
// Example fetch request
async function queryMCP(question) {
const response = await fetch('http://localhost:3000/api/mcp/query', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': 'your_api_key'
},
body: JSON.stringify({
query: question,
contextType: 'vocabulary'
})
});
return await response.json();
}
See our TESTING_GUIDE.md for comprehensive examples of all endpoints.
The project follows a modular architecture designed to separate concerns and make the MCP implementation flexible and extensible.
spanish-learning-mcp/
├── contexts/
│   └── claude/
│       └── claude-mcp.ts          # Core MCP implementation
├── examples/
│   ├── spanish-mcp-demo.js        # Comprehensive example with all context types
│   ├── interactive-demo.js        # Interactive terminal interface
│   └── frontend-integration.js    # Example of integrating with other frontends
├── lib/
│   ├── appwrite.ts                # Appwrite client and database helpers
│   └── mcp-module.js              # Modular MCP implementation for other projects
└── models/
    ├── common.ts                  # Shared type definitions
    ├── Grammar.ts                 # Grammar model definitions
    └── Vocabulary.ts              # Vocabulary model definitions
- contexts/claude/claude-mcp.ts: The heart of the MCP implementation. Defines the ClaudeMcp class that handles context retrieval, formatting, and interaction with the Claude AI API.
- lib/appwrite.ts: Provides functions for interacting with the Appwrite database, including retrieving vocabulary and grammar data.
- lib/mcp-module.js: A modular implementation of the MCP that can be easily imported and used in other projects.
- models/*.ts: Define the data structures and types used throughout the application, ensuring type safety and consistency.
- examples/spanish-mcp-demo.js: A comprehensive example that demonstrates how to use the MCP with different types of context.
- examples/interactive-demo.js: An interactive terminal interface for querying the MCP in real-time.
The MCP retrieves context based on the user's query and specified parameters. Here's the actual code from our implementation:
// From claude-mcp.ts
async getContext(options: ContextOptions): Promise<string> {
let contextParts: string[] = [];
switch (options.contextType) {
case ContextType.VOCABULARY:
contextParts.push(await this.getVocabularyContext(options));
break;
case ContextType.GRAMMAR:
contextParts.push(await this.getGrammarContext(options));
break;
case ContextType.MIXED:
contextParts.push(await this.getVocabularyContext(options));
contextParts.push(await this.getGrammarContext(options));
break;
default:
contextParts.push(await this.getVocabularyContext(options));
}
return contextParts.join('\n\n');
}
The getVocabularyContext method retrieves vocabulary items from the database (or hardcoded data in the demo) based on filters:
// From claude-mcp.ts
private async getVocabularyContext(options: ContextOptions): Promise<string> {
const filters: Record<string, any> = {};
if (options.categories?.length) {
// Filter by category
const wordCategories = options.categories.filter(
cat => Object.values(WordCategory).includes(cat as WordCategory)
) as WordCategory[];
if (wordCategories.length > 0) {
filters.category = wordCategories[0];
}
}
if (options.difficultyLevel) {
filters.difficultyLevel = options.difficultyLevel;
}
if (options.searchTerm) {
filters.searchTerm = options.searchTerm;
}
const result = await getVocabularyItems(
filters,
{ limit: options.maxItems || 10 }
);
return this.formatVocabularyForContext(result.items, options.includeExamples);
}
The retrieved data is formatted into a structured markdown format that Claude AI can easily understand:
// From claude-mcp.ts
private formatVocabularyForContext(items: VocabularyModel[], includeExamples: boolean = true): string {
if (items.length === 0) {
return "No vocabulary items found.";
}
let context = "# Spanish Vocabulary Reference\n\n";
// Group by category
const categorizedItems: Record<string, VocabularyModel[]> = {};
items.forEach(item => {
if (!categorizedItems[item.category]) {
categorizedItems[item.category] = [];
}
categorizedItems[item.category].push(item);
});
// Format each category and its items
Object.entries(categorizedItems).forEach(([category, categoryItems]) => {
context += `## ${this.capitalizeFirstLetter(category)}\n\n`;
categoryItems.forEach(item => {
context += `### ${item.word}\n`;
context += `- **Translation:** ${item.translation}\n`;
context += `- **Difficulty:** ${item.difficultyLevel}\n`;
if (item.notes) {
context += `- **Notes:** ${item.notes}\n`;
}
if (includeExamples && item.usageExamples && item.usageExamples.length > 0) {
context += "\n**Examples:**\n";
item.usageExamples.forEach((example: any) => {
context += `- Spanish: ${example.spanish}\n`;
context += ` English: ${example.english}\n`;
if (example.explanation) {
context += ` Explanation: ${example.explanation}\n`;
}
context += "\n";
});
}
context += "\n";
});
});
return context;
}
This produces a markdown document that looks like:
# Spanish Vocabulary Reference
## Greeting
### hola
- **Translation:** hello
- **Difficulty:** beginner
**Examples:**
- Spanish: ¡Hola! ¿Cómo estás?
English: Hello! How are you?
- Spanish: Hola a todos.
English: Hello everyone.
### gracias
- **Translation:** thank you
- **Difficulty:** beginner
**Examples:**
- Spanish: Muchas gracias por tu ayuda.
English: Thank you very much for your help.
- Spanish: Gracias por venir.
English: Thank you for coming.
The formatted context is injected into the system prompt sent to Claude AI:
// From claude-mcp.ts
async queryWithContext(
userMessage: string,
contextOptions: ContextOptions
): Promise<string> {
const context = await this.getContext(contextOptions);
const systemPrompt = `You are a helpful Spanish language tutor. Use the following Spanish language reference materials to help answer the user's question:\n\n${context}
IMPORTANT: Always format your responses using the following structure:
1. Start with a brief introduction in English
2. For vocabulary words, use this format:
- [Spanish word/phrase] - [English translation]
3. For useful phrases, use this format:
- [Spanish phrase]
[English translation]
4. For grammar explanations, use this format:
### [Grammar Topic]
[Explanation in English]
Examples:
- Spanish: [Spanish example]
English: [English translation]
5. End with a brief conclusion or encouragement in English.`;
// Send to Claude AI...
}
This structured prompt ensures that Claude's responses follow a consistent format, making them easier to read and parse. The format also works well with the Spanish text highlighting in the interactive terminal interface.
In the demo implementation, this looks like:
// From spanish-mcp-demo.js
async queryWithContext(userMessage, contextType = 'vocabulary') {
let context = "";
// Get the appropriate context based on the type
switch (contextType) {
case 'vocabulary':
context = this.formatVocabularyContext();
break;
case 'grammar':
context = this.formatGrammarContext();
break;
case 'mixed':
context = this.formatVocabularyContext() + "\n\n" + this.formatGrammarContext();
break;
default:
context = this.formatVocabularyContext();
}
const systemPrompt = `You are a helpful Spanish language tutor. Use the following Spanish language reference materials to help answer the user's question:\n\n${context}`;
// Send to Claude AI...
}
The response from Claude AI is processed and returned to the caller:
// From claude-mcp.ts
try {
const response = await this.anthropic.messages.create({
model: this.config.model,
max_tokens: this.config.maxTokens,
temperature: this.config.temperature,
system: systemPrompt,
messages: [{ role: "user", content: userMessage }],
});
if (response.content[0].type === "text") {
return response.content[0].text;
} else {
return "No text response received from Claude";
}
} catch (error) {
console.error("Error querying Claude:", error);
throw new Error("Failed to get response from Claude");
}
import { createClaudeMcp, ContextType } from "./contexts/claude/claude-mcp.js";
// Create an instance of the MCP
const claudeMcp = createClaudeMcp("your_anthropic_api_key");
// Query Claude with vocabulary context
const response = await claudeMcp.queryWithContext(
'How do I say "hello" in Spanish?',
{
contextType: ContextType.VOCABULARY,
maxItems: 5,
includeExamples: true,
}
);
console.log(response);
Here's how to use different context types from our demo implementation:
// From spanish-mcp-demo.js
// Vocabulary example
const vocabQuestion = 'How do I say "hello" and "thank you" in Spanish?';
const vocabResponse = await mcp.queryWithContext(vocabQuestion, "vocabulary");
// Grammar example
const grammarQuestion =
'How do I conjugate the verb "hablar" in the present tense?';
const grammarResponse = await mcp.queryWithContext(grammarQuestion, "grammar");
// Mixed context example
const mixedQuestion =
'How do I use "gracias" in a sentence with the correct verb conjugation?';
const mixedResponse = await mcp.queryWithContext(mixedQuestion, "mixed");
The interactive terminal interface allows you to query the MCP in real-time:
// From interactive-demo.js
const askQuestion = () => {
rl.question(chalk.green("> "), async (input) => {
// Check for special commands
if (input.trim() === "/exit") {
console.log(chalk.yellow("Goodbye!"));
rl.close();
return;
} else if (input.trim() === "/vocab") {
contextType = "vocabulary";
console.log(chalk.yellow(`Switched to vocabulary context`));
askQuestion();
return;
} else if (input.trim() === "/grammar") {
contextType = "grammar";
console.log(chalk.yellow(`Switched to grammar context`));
askQuestion();
return;
} else if (input.trim() === "/mixed") {
contextType = "mixed";
console.log(chalk.yellow(`Switched to mixed context`));
askQuestion();
return;
}
// Process the user's question
if (input.trim()) {
console.log(
chalk.blue("Querying Claude with " + contextType + " context...")
);
try {
const response = await mcp.queryWithContext(input, contextType);
// Highlight Spanish words in the response
const highlightedResponse = highlightSpanishWords(response);
console.log(chalk.cyan("Claude's response:"));
console.log(highlightedResponse);
} catch (error) {
console.error(chalk.red("Error:"), error.message);
}
}
// Ask for the next question
askQuestion();
});
};
The interactive terminal interface includes a feature to highlight Spanish words and phrases in cyan, making them easier to identify:
````javascript
// From interactive-demo.js
function highlightSpanishWords(text) {
  if (!text) return "";

  // Split the text into lines to process each line separately
  const lines = text.split("\n");
  const processedLines = lines.map((line) => {
    // Skip processing if the line is a code block or URL
    if (line.includes("```") || line.includes("http")) {
      return line;
    }

    // Skip processing if the line is likely an English conclusion
    if (line.includes("Remember") || line.includes("practice")) {
      // Only highlight the Spanish phrase at the end if it exists
      if (line.includes("¡Buen provecho!")) {
        return line.replace(/(¡Buen provecho!)/, chalk.cyan("$1"));
      }
      return line;
    }

    // Process Spanish phrases with dash/hyphen format (common in vocabulary lists)
    if (line.match(/^[-•]\s+([^-]+)\s+-\s+/)) {
      return line.replace(
        /^([-•]\s+)([^-]+)(\s+-\s+)(.+)$/,
        (match, bullet, spanish, separator, english) => {
          return bullet + chalk.cyan(spanish) + separator + english;
        }
      );
    }

    // Process lines with Spanish word/phrase followed by dash and English translation
    if (line.match(/^([^-]+)\s+-\s+/)) {
      return line.replace(
        /^([^-]+)(\s+-\s+)(.+)$/,
        (match, spanish, separator, english) => {
          return chalk.cyan(spanish) + separator + english;
        }
      );
    }

    // Highlight Spanish phrases with special characters
    if (line.includes("Spanish:") || /[áéíóúüñ¿¡]/.test(line)) {
      // If it's a line with "Spanish:" label, highlight everything after the colon
      if (line.includes("Spanish:")) {
        return line.replace(/(Spanish:)(.*)/, (match, label, content) => {
          return label + chalk.cyan(content);
        });
      }
      // Highlight words with Spanish characters
      line = line.replace(/\b\w*[áéíóúüñÁÉÍÓÚÜÑ]\w*\b/g, (match) =>
        chalk.cyan(match)
      );
    }

    return line;
  });

  return processedLines.join("\n");
}
````
This highlighting feature makes it easier to:
- Identify Spanish vocabulary words and phrases
- Distinguish between Spanish and English text
- See the structure of responses clearly
- Focus on the Spanish language elements
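The dash-format rule above can be sketched in isolation. This minimal example substitutes a raw ANSI escape for `chalk` (an assumption made purely so the snippet runs without dependencies); the regex is the same one used in `highlightSpanishWords`:

```javascript
// cyan() uses a raw ANSI escape in place of chalk (assumed substitute).
const cyan = (s) => `\x1b[36m${s}\x1b[0m`;

// Colorize only the Spanish side of a "spanish - english" line.
function highlightDashLine(line) {
  const m = line.match(/^([^-]+)(\s+-\s+)(.+)$/);
  return m ? cyan(m[1]) + m[2] + m[3] : line;
}

console.log(highlightDashLine("hola - hello"));
```

Lines without the dash pattern pass through unchanged, which is what keeps English prose uncolored.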
The MCP is designed to be easily integrated with other projects. We provide a modular implementation in `lib/mcp-module.js` that can be imported and used in any JavaScript or TypeScript project.
```javascript
import {
  createSpanishMcp,
  ContextType,
  ContextOptions,
} from "./lib/mcp-module.js";

// Create an MCP instance with your API key
const mcp = createSpanishMcp("your_anthropic_api_key");

// Query the MCP
const options = new ContextOptions({
  contextType: ContextType.VOCABULARY,
  maxItems: 10,
  includeExamples: true,
});

const response = await mcp.queryWithContext(
  'How do I say "hello" in Spanish?',
  options
);

console.log(response);
```
You can provide your own vocabulary and grammar data:
```javascript
const customData = {
  vocabulary: [
    {
      word: "hola",
      translation: "hello",
      category: "greeting",
      difficultyLevel: "beginner",
      usageExamples: [
        { spanish: "¡Hola! ¿Cómo estás?", english: "Hello! How are you?" },
        { spanish: "Hola a todos.", english: "Hello everyone." },
      ],
    },
    // More vocabulary items...
  ],
  grammar: [
    {
      title: "Present Tense Conjugation",
      category: "verb_tense",
      difficultyLevel: "beginner",
      explanation:
        "In Spanish, verbs in the present tense change their endings...",
      examples: [
        { spanish: "Yo hablo español.", english: "I speak Spanish." },
        // More examples...
      ],
    },
    // More grammar rules...
  ],
};

const mcp = createSpanishMcp("your_anthropic_api_key", { customData });
```
The MCP module allows you to customize the format of responses:
```javascript
// Create an MCP instance with custom configuration
const mcp = createSpanishMcp("your_anthropic_api_key", {
  customData,
  model: "claude-3-haiku-20240307", // Use a different Claude model
  maxTokens: 2000, // Increase max tokens
  temperature: 0.5, // Lower temperature for more consistent responses
});
```
The MCP module is framework-agnostic and can be integrated with various frontend and backend frameworks:
```jsx
import { useState, useMemo } from "react";
import { createSpanishMcp, ContextType } from "./lib/mcp-module.js";

function SpanishTutor() {
  const [query, setQuery] = useState("");
  const [response, setResponse] = useState("");
  const [loading, setLoading] = useState(false);

  // Memoize so the MCP instance isn't recreated on every render
  const mcp = useMemo(
    () => createSpanishMcp(process.env.REACT_APP_ANTHROPIC_API_KEY),
    []
  );

  const handleSubmit = async (e) => {
    e.preventDefault();
    setLoading(true);
    try {
      const result = await mcp.queryWithContext(query, {
        contextType: ContextType.MIXED,
      });
      setResponse(result);
    } catch (error) {
      console.error(error);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input value={query} onChange={(e) => setQuery(e.target.value)} />
        <button type="submit" disabled={loading}>
          Ask
        </button>
      </form>
      {response && <div>{response}</div>}
    </div>
  );
}
```
```javascript
import express from "express";
import { createSpanishMcp, ContextType } from "./lib/mcp-module.js";

const app = express();
app.use(express.json());

// Initialize MCP
const mcp = createSpanishMcp(process.env.ANTHROPIC_API_KEY);

app.post("/api/spanish-tutor", async (req, res) => {
  try {
    const { query, contextType = ContextType.VOCABULARY } = req.body;
    if (!query) {
      return res.status(400).json({ error: "Query is required" });
    }
    const response = await mcp.queryWithContext(query, { contextType });
    res.json({ response });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3000, () => {
  console.log("Server running on port 3000");
});
```
Here's a more complete example of using the MCP in a React component, with a context-type selector and explicit `ContextOptions`:
```jsx
import { useState, useMemo } from "react";
import {
  createSpanishMcp,
  ContextType,
  ContextOptions,
} from "../lib/mcp-module.js";

function SpanishTutor() {
  const [query, setQuery] = useState("");
  const [response, setResponse] = useState("");
  const [contextType, setContextType] = useState(ContextType.VOCABULARY);
  const [isLoading, setIsLoading] = useState(false);

  // Initialize MCP
  const mcp = useMemo(() => {
    return createSpanishMcp("your_anthropic_api_key");
  }, []);

  const handleSubmit = async (e) => {
    e.preventDefault();
    if (!query.trim()) return;
    setIsLoading(true);
    try {
      const options = new ContextOptions({
        contextType,
        maxItems: 10,
        includeExamples: true,
      });
      const result = await mcp.queryWithContext(query, options);
      setResponse(result);
    } catch (error) {
      console.error("Error:", error);
      setResponse("Error: " + error.message);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div>
      <h1>Spanish Tutor</h1>
      {/* Context type selector */}
      <div>
        <label>
          <input
            type="radio"
            value={ContextType.VOCABULARY}
            checked={contextType === ContextType.VOCABULARY}
            onChange={() => setContextType(ContextType.VOCABULARY)}
          />
          Vocabulary
        </label>
        {/* Other radio buttons... */}
      </div>
      {/* Query form */}
      <form onSubmit={handleSubmit}>
        <textarea
          value={query}
          onChange={(e) => setQuery(e.target.value)}
          placeholder="Ask a question about Spanish..."
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? "Loading..." : "Ask"}
        </button>
      </form>
      {/* Response display */}
      {response && (
        <div>
          <h2>Response:</h2>
          <div>{response}</div>
        </div>
      )}
    </div>
  );
}
```
```javascript
import express from "express";
import {
  createSpanishMcp,
  ContextType,
  ContextOptions,
} from "../lib/mcp-module.js";

const app = express();
app.use(express.json());

// Initialize MCP
const apiKey = process.env.ANTHROPIC_API_KEY;
const mcp = createSpanishMcp(apiKey, {
  useAppwrite: true, // Use Appwrite for data instead of custom data
});

// API endpoint for querying the MCP
app.post("/api/spanish-tutor", async (req, res) => {
  try {
    const { query, contextType = ContextType.VOCABULARY } = req.body;
    if (!query) {
      return res.status(400).json({ error: "Query is required" });
    }
    const options = new ContextOptions({
      contextType,
      maxItems: 10,
      includeExamples: true,
    });
    const response = await mcp.queryWithContext(query, options);
    res.json({ response });
  } catch (error) {
    console.error("Error:", error);
    res.status(500).json({ error: error.message });
  }
});

app.listen(3000, () => {
  console.log("Server running on port 3000");
});
```
For Next.js applications, you can create an API route that uses the MCP module:
```javascript
// pages/api/spanish-tutor.js
import { createSpanishMcp, ContextType } from "../../lib/mcp-module.js";

// Initialize MCP outside of the handler to reuse connections
const mcp = createSpanishMcp(process.env.ANTHROPIC_API_KEY, {
  connectionPoolSize: 5, // Optimize for serverless
});

export default async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Method not allowed" });
  }
  try {
    const { query, contextType = ContextType.VOCABULARY } = req.body;
    if (!query) {
      return res.status(400).json({ error: "Query is required" });
    }
    const response = await mcp.queryWithContext(query, { contextType });
    res.status(200).json({ response });
  } catch (error) {
    console.error("Error processing request:", error);
    res
      .status(500)
      .json({ error: "An error occurred while processing your request" });
  }
}
```
For NestJS applications, create a service and controller:
```typescript
// spanish.service.ts
import { Injectable } from "@nestjs/common";
import { createSpanishMcp, ContextType } from "../lib/mcp-module.js";

@Injectable()
export class SpanishService {
  private mcp;

  constructor() {
    this.mcp = createSpanishMcp(process.env.ANTHROPIC_API_KEY);
  }

  async queryWithContext(
    query: string,
    contextType: string = ContextType.VOCABULARY
  ) {
    return this.mcp.queryWithContext(query, { contextType });
  }
}
```

```typescript
// spanish.controller.ts
import { Controller, Post, Body } from "@nestjs/common";
import { SpanishService } from "./spanish.service";

@Controller("spanish")
export class SpanishController {
  constructor(private readonly spanishService: SpanishService) {}

  @Post("query")
  async query(@Body() body: { query: string; contextType?: string }) {
    const { query, contextType } = body;
    const response = await this.spanishService.queryWithContext(
      query,
      contextType
    );
    return { response };
  }
}
```
For serverless Firebase deployments:
```javascript
// functions/index.js
const functions = require("firebase-functions");
const { createSpanishMcp, ContextType } = require("./lib/mcp-module.js");

// Initialize MCP with connection pooling optimized for serverless
const mcp = createSpanishMcp(process.env.ANTHROPIC_API_KEY, {
  connectionPoolSize: 1, // Lower for serverless
  cacheTTL: 3600000, // 1 hour cache for serverless efficiency
});

exports.spanishTutor = functions.https.onRequest(async (req, res) => {
  // Set CORS headers
  res.set("Access-Control-Allow-Origin", "*");
  if (req.method === "OPTIONS") {
    res.set("Access-Control-Allow-Methods", "POST");
    res.set("Access-Control-Allow-Headers", "Content-Type");
    res.status(204).send("");
    return;
  }
  if (req.method !== "POST") {
    res.status(405).send("Method Not Allowed");
    return;
  }
  try {
    const { query, contextType = ContextType.VOCABULARY } = req.body;
    if (!query) {
      res.status(400).send({ error: "Query is required" });
      return;
    }
    const response = await mcp.queryWithContext(query, { contextType });
    res.status(200).send({ response });
  } catch (error) {
    console.error("Error:", error);
    res.status(500).send({ error: error.message || "Internal Server Error" });
  }
});
```
---
## 🔬 Advanced Topics
### Customizing Context Retrieval
You can customize how context is retrieved by modifying the `getVocabularyContext` and `getGrammarContext` methods in `claude-mcp.ts`. For example, you might want to:
```typescript
// Example: Adding semantic search to getVocabularyContext
private async getVocabularyContext(options: ContextOptions): Promise<string> {
  // ... existing code ...

  // Add semantic search
  if (options.semanticQuery) {
    const embeddings = await this.getEmbeddings(options.semanticQuery);
    const semanticResults = await this.searchByEmbeddings(embeddings);
    // Combine with other results or use directly
    result.items = [...result.items, ...semanticResults];
  }

  return this.formatVocabularyForContext(result.items, options.includeExamples);
}
```
The format of the context can significantly impact the quality of responses. Consider:
```typescript
// Example: Enhanced formatting with additional metadata
private formatVocabularyForContext(items: VocabularyModel[], includeExamples: boolean = true): string {
  // ... existing code ...

  // Add metadata section to help the model understand the structure
  context = "# Spanish Vocabulary Reference\n\n" +
    "## Metadata\n" +
    `- Total items: ${items.length}\n` +
    `- Categories: ${Array.from(new Set(items.map(item => item.category))).join(', ')}\n` +
    `- Difficulty levels: ${Array.from(new Set(items.map(item => item.difficultyLevel))).join(', ')}\n\n` +
    context;

  return context;
}
```
The MCP can be extended to support additional features:
```typescript
// Example: Adding conversation history support
export interface ConversationContext {
  history: { role: 'user' | 'assistant'; content: string }[];
}

async queryWithContextAndHistory(
  userMessage: string,
  contextOptions: ContextOptions,
  conversationContext: ConversationContext
): Promise<string> {
  const context = await this.getContext(contextOptions);
  const systemPrompt = `You are a helpful Spanish language tutor. Use the following Spanish language reference materials to help answer the user's question:\n\n${context}`;

  // Include conversation history in the messages
  const messages = [
    ...conversationContext.history,
    { role: 'user', content: userMessage },
  ];

  // Send to Claude AI...
}
```
**Issue:** API key errors when running the demo.
**Solution:** Ensure your Anthropic API key is correctly set in `.env.local`.

**Issue:** Module resolution errors.
**Solution:** The project uses a mixed module system with NodeNext resolution. Ensure your imports use the correct file extensions (`.js` for ES modules, `.mjs` for explicit ES modules).

**Issue:** TypeScript errors.
**Solution:** Run `npm run build` to check for TypeScript errors. Make sure you're using Node.js 16+ and TypeScript 5+.

**Issue:** Tests fail with timeouts.
**Solution:** Some tests may time out if rate limiting is happening. Add `jest.setTimeout(30000)` to increase the timeout, or use the `--testTimeout=30000` flag.
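The timeout can also be set once for the whole suite in the Jest configuration. A sketch, assuming a standalone `jest.config.js` at the project root (merge with your existing options if you already have one):

```javascript
// jest.config.js — sketch; adjust to your existing Jest setup
module.exports = {
  testTimeout: 30000, // same effect as the --testTimeout=30000 flag
};
```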
**Issue:** Tests fail with auth errors.
**Solution:** Ensure your test environment includes mock API keys or proper auth setup in `.env.test`.

**Issue:** Memory issues during testing.
**Solution:** Run tests with `NODE_OPTIONS="--max-old-space-size=4096" npm test` to allocate more memory.
| Error Code | Description | Solution |
|---|---|---|
| `AUTH_001` | Invalid API key | Check your API key in headers or env variables |
| `AUTH_002` | Insufficient tier access | Upgrade to a higher tier or modify your query |
| `RATE_001` | Rate limit exceeded | Wait before trying again or upgrade your tier |
| `MCP_001` | Context generation failed | Check your query and context parameters |
| `MCP_002` | Claude API error | Check logs for the detailed error from Anthropic |
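Clients can map these codes to recovery actions. A hedged sketch — the exact response shape carrying the code is an assumption for illustration, not the server's documented schema:

```javascript
// Map the documented error codes to a client-side recovery action.
// How the code arrives (e.g. in the JSON error body) is assumed here.
function actionFor(code) {
  switch (code) {
    case "AUTH_001":
      return "fix-api-key"; // check headers / env variables
    case "AUTH_002":
      return "upgrade-tier"; // or simplify the query
    case "RATE_001":
      return "retry-later"; // back off before retrying
    case "MCP_001":
      return "revise-query"; // check query and context parameters
    case "MCP_002":
      return "check-logs"; // detailed error comes from Anthropic
    default:
      return "unknown";
  }
}

console.log(actionFor("RATE_001"));
```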
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ by Danny Thompson (DThompsonDev) at This Dot Labs
Twitter • GitHub • This Dot Labs