
Running MCP Servers with Docker

Containerize your MCP servers for better isolation, reproducibility, and easier deployment.

Marcus Chen
Updated January 15, 2025
12 min read

Docker provides an excellent way to run MCP servers in isolated environments. This is especially valuable for servers that require specific dependencies, access sensitive resources, or need to be deployed consistently across different machines.

1. Why Use Docker for MCP?

Running MCP servers in Docker containers offers several advantages:

  • Isolation: Each server runs in its own container with controlled access to the host system
  • Reproducibility: The same container runs identically on any machine with Docker
  • Dependency Management: No conflicts between servers requiring different library versions
  • Security: Limit what resources a server can access through container boundaries
  • Easy Deployment: Ship your server as a single container image
  • Version Control: Tag and manage different versions of your server

When Docker Makes Sense

  • Servers with complex dependencies (database drivers, system libraries)
  • Production deployments requiring consistency
  • Security-sensitive servers that need isolation
  • Servers that need to run on multiple platforms
  • Team environments where everyone needs the same setup

2. Basic Dockerfile

Here's a simple Dockerfile for a TypeScript MCP server:

FROM node:20-slim

WORKDIR /app

# Copy package files first for better caching
COPY package*.json ./

# Install all dependencies (dev dependencies are needed for the TypeScript
# build step; see the multi-stage build below for keeping them out of the
# final image)
RUN npm ci

# Copy source code
COPY . .

# Build TypeScript
RUN npm run build

# The MCP server communicates via stdio
CMD ["node", "dist/index.js"]
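Because COPY . . copies the entire build context into the image, add a .dockerignore next to the Dockerfile so local artifacts don't leak in or invalidate the layer cache. A minimal, illustrative list:

# .dockerignore
node_modules
dist
.git
.env*
*.log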

Python Server Dockerfile

FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# Run the server
CMD ["python", "server.py"]
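One Python-specific caveat: when stdout is not a TTY (as in a container), Python block-buffers it, which can stall the stdio transport because responses sit in the buffer instead of reaching the client. Disable buffering in the Dockerfile:

# Ensure responses are flushed to stdout immediately
ENV PYTHONUNBUFFERED=1

# Or, equivalently, pass -u at startup:
# CMD ["python", "-u", "server.py"]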

Multi-Stage Build (Smaller Images)

# Build stage
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
CMD ["node", "dist/index.js"]

3. Configuring Claude Desktop

To use a Docker-based server with Claude Desktop, configure it to run the container:

{
  "mcpServers": {
    "my-docker-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "my-mcp-server:latest"
      ]
    }
  }
}

The -i Flag is Critical

The -i (interactive) flag keeps stdin open, which is required for MCP's stdio transport. Without it, the server won't receive messages from the client. The --rm flag automatically removes the container when it exits.
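You can sanity-check the stdio wiring outside Claude by piping a single JSON-RPC initialize message into the container. A sketch in Python; the image name and protocolVersion are illustrative, so adjust them to your setup:

```python
import json

# Build a minimal JSON-RPC "initialize" request. MCP's stdio transport is
# newline-delimited JSON, so the message is one line terminated by "\n".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative; match your SDK
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.1"},
    },
}
line = json.dumps(request) + "\n"

# To exercise a real container (requires Docker and a built image):
# import subprocess
# proc = subprocess.run(
#     ["docker", "run", "-i", "--rm", "my-mcp-server:latest"],
#     input=line, capture_output=True, text=True, timeout=15,
# )
# print(proc.stdout)  # a healthy server replies with an "initialize" result
```

If nothing comes back, the first things to check are the missing -i flag and output buffering inside the container.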

4. Mounting Volumes

For servers that need access to local files, mount volumes carefully:

{
  "mcpServers": {
    "filesystem-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v", "/Users/me/documents:/data:ro",
        "filesystem-mcp:latest",
        "/data"
      ]
    }
  }
}

Read-Only Mounts

Use the :ro suffix to mount volumes as read-only whenever possible. This prevents the AI from accidentally modifying important files. Only use read-write mounts when the server genuinely needs to write data.
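Read-only mounts protect files at the filesystem level; it is also worth enforcing the mount boundary inside the server so a crafted path can't escape it. A sketch, assuming the /data mount point from the example above:

```python
from pathlib import Path

ALLOWED_ROOT = Path("/data")  # the container-side mount point (illustrative)

def resolve_safe(requested: str) -> Path:
    """Resolve a client-supplied path, refusing anything outside the mount.

    resolve() collapses any ../ segments before the check, and an absolute
    path in `requested` replaces ALLOWED_ROOT entirely, so both escape
    routes are rejected.
    """
    candidate = (ALLOWED_ROOT / requested).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"{requested!r} escapes {ALLOWED_ROOT}")
    return candidate
```

This way a request like ../etc/passwd raises an error instead of being silently served.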

Multiple Volume Mounts

{
  "mcpServers": {
    "project-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v", "/Users/me/projects:/projects:ro",
        "-v", "/Users/me/config:/config:ro",
        "-v", "/tmp/mcp-output:/output:rw",
        "project-mcp:latest"
      ]
    }
  }
}

5. Environment Variables

Pass secrets and configuration via environment variables. Note that values passed with -e appear in the host's process list and in docker inspect output, so prefer an env file for real secrets:

{
  "mcpServers": {
    "api-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e", "API_KEY=sk_live_xxx",
        "-e", "API_URL=https://api.example.com",
        "-e", "LOG_LEVEL=info",
        "api-mcp-server:latest"
      ]
    }
  }
}

Using Environment Files

When you have more than a few variables, or want to keep secrets out of the Claude Desktop config file itself, use an env file:

# .env.mcp
API_KEY=sk_live_xxx
DATABASE_URL=postgresql://localhost/db
REDIS_URL=redis://localhost:6379

{
  "mcpServers": {
    "api-server": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--env-file", "/path/to/.env.mcp",
        "api-mcp-server:latest"
      ]
    }
  }
}
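On the server side, reading this configuration is straightforward. A sketch using the variable names from the examples above (the defaults are illustrative):

```python
import os

def load_config(env=None):
    """Read server configuration from environment variables.

    Required keys raise KeyError at startup, which is preferable to
    failing later in the middle of a request.
    """
    env = os.environ if env is None else env
    return {
        "api_key": env["API_KEY"],  # required: fail fast if missing
        "api_url": env.get("API_URL", "https://api.example.com"),
        "log_level": env.get("LOG_LEVEL", "info"),
    }

# Example with an explicit mapping (placeholder value, not a real key):
config = load_config({"API_KEY": "placeholder"})
```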

6. Network Configuration

By default, Docker containers have network access. For security, you can restrict this:

No Network Access

docker run -i --rm --network none my-mcp-server

Host Network (Access Local Services)

docker run -i --rm --network host my-mcp-server

Note that --network host behaves differently with Docker Desktop on macOS and Windows, where containers run inside a VM; there, host.docker.internal is usually the more portable way to reach services running on the host.

Custom Network

# Create a network
docker network create mcp-network

# Run server on that network
docker run -i --rm --network mcp-network my-mcp-server
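Containers on the same user-defined network can reach each other by container name through Docker's built-in DNS, which lets your server talk to a database without exposing it to the host. Image names and credentials here are illustrative:

# Start a database on the network
docker run -d --rm --network mcp-network --name postgres \
  -e POSTGRES_PASSWORD=pass postgres:15

# The server can now reach it at the hostname "postgres"
docker run -i --rm --network mcp-network \
  -e DATABASE_URL=postgres://user:pass@postgres:5432/db \
  my-mcp-server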

7. Using Docker Compose

For servers that depend on other services (like databases), use Docker Compose:

# docker-compose.yml

services:
  mcp-server:
    build: .
    stdin_open: true
    # note: avoid "tty: true"; a pseudo-terminal can interfere with stdio framing
    depends_on:
      - postgres
      - redis
    environment:
      - DATABASE_URL=postgres://user:pass@postgres:5432/db
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./data:/data:ro
  
  postgres:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=user
      - POSTGRES_DB=db
    volumes:
      - postgres_data:/var/lib/postgresql/data
  
  redis:
    image: redis:7-alpine

volumes:
  postgres_data:

Configure Claude to run the server through docker-compose. The -T flag disables pseudo-TTY allocation, which would otherwise interfere with the stdio stream:

{
  "mcpServers": {
    "db-server": {
      "command": "docker-compose",
      "args": [
        "-f", "/path/to/docker-compose.yml",
        "run",
        "--rm",
        "-T",
        "mcp-server"
      ]
    }
  }
}

8. Security Best Practices

Docker Security Checklist

  • ✓ Use specific version tags, not :latest
  • ✓ Run as non-root user inside container
  • ✓ Mount volumes read-only when possible
  • ✓ Disable network access if not needed
  • ✓ Set resource limits (memory, CPU)
  • ✓ Use multi-stage builds for smaller images
  • ✓ Scan images for vulnerabilities
  • ✓ Don't store secrets in images

Run as Non-Root User

FROM node:20-slim

# Create non-root user
RUN useradd -m -s /bin/bash mcpuser

WORKDIR /app
COPY --chown=mcpuser:mcpuser . .

# Switch to non-root user
USER mcpuser

CMD ["node", "dist/index.js"]

Set Resource Limits

docker run -i --rm \
  --memory=512m \
  --cpus=1 \
  --pids-limit=100 \
  my-mcp-server

Drop Capabilities

docker run -i --rm \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  my-mcp-server
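Putting the hardening options together, a locked-down invocation might look like this (adjust the limits, mount, and image name to your server; drop --read-only or add --tmpfs /tmp if the server writes temporary files):

docker run -i --rm \
  --network none \
  --memory=512m \
  --cpus=1 \
  --pids-limit=100 \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  --read-only \
  -v /Users/me/documents:/data:ro \
  my-mcp-server:1.0.0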

9. Performance Optimization

Startup Time

Container startup adds latency. Optimize with:

  • Use slim base images (node:20-slim, python:3.11-slim)
  • Use multi-stage builds to reduce image size
  • Pre-pull images so the first launch doesn't block on a download: docker pull my-mcp-server:latest

Layer Caching

Order Dockerfile commands for optimal caching:

# Dependencies change less often - cache this layer
COPY package*.json ./
RUN npm ci

# Source changes more often - this layer rebuilds
COPY . .
RUN npm run build

Image Size Comparison

# Check image sizes
docker images | grep mcp

# Example output:
# my-mcp-server    latest    150MB  (with multi-stage)
# my-mcp-server    dev       850MB  (without optimization)

Conclusion

Docker provides a robust way to run MCP servers securely and consistently. While it adds some complexity, the benefits of isolation, reproducibility, and security make it worthwhile for production deployments and security-sensitive use cases. Start with a basic setup and add security hardening as needed.
