Components Architecture#
Detailed architecture of the three sacred layers that compose the MCP OAuth Gateway.
Component Hierarchy#
Layer 1: Traefik (Divine Router)#
Core Responsibilities#
Traefik serves as the single entry point for all external traffic:
SSL Termination: Automatic HTTPS via Let’s Encrypt
Request Routing: Path and host-based routing with priorities
Authentication Enforcement: ForwardAuth middleware
Load Balancing: Distribute requests across service instances
Service Discovery: Automatic via Docker labels
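Service discovery via Docker labels means each service declares its own routing in its Compose definition; a minimal sketch (the router name, entrypoint, and `letsencrypt` resolver name are illustrative assumptions):

```yaml
services:
  mcp-service:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.mcp-service.rule=Host(`service.domain`)"
      - "traefik.http.routers.mcp-service.entrypoints=websecure"
      - "traefik.http.routers.mcp-service.tls.certresolver=letsencrypt"
      - "traefik.http.services.mcp-service.loadbalancer.server.port=3000"
```

Traefik watches the Docker socket and builds the router and service from these labels at container start, with no central routing file to edit.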
Configuration Components#
Entrypoints#
entrypoints:
  web:
    address: ":80"
    http:
      redirections:
        entrypoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
Routers#
# Priority-based routing
routers:
  auth-oauth:
    rule: "PathPrefix(`/register`)"
    priority: 4  # Highest
  mcp-service:
    rule: "Host(`service.domain`)"
    priority: 2
    middlewares:
      - mcp-auth@file
Middlewares#
middlewares:
  mcp-auth:
    forwardAuth:
      address: "http://auth:8000/verify"
      authResponseHeaders:
        - X-User-Id
        - X-User-Name
Scaling Considerations#
Stateless design enables horizontal scaling
Shared certificate storage via volume
Configuration via Docker labels
Health checks built-in
Layer 2: Auth Service (OAuth Oracle)#
Core Components#
OAuth Server (Authlib)#
OAuth 2.1 implementation
PKCE enforcement
Token introspection
Token revocation
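PKCE enforcement means the auth service must recompute the S256 challenge from the client's `code_verifier` at token exchange and compare it against the challenge stored at authorization time. A minimal standard-library sketch (function names are illustrative, not from the codebase):

```python
import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # RFC 7636: 43-128 chars from the unreserved set; token_urlsafe(32) fits
    return secrets.token_urlsafe(32)

def s256_challenge(verifier: str) -> str:
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), no padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_pkce(stored_challenge: str, presented_verifier: str) -> bool:
    # Recompute and compare in constant time
    return secrets.compare_digest(s256_challenge(presented_verifier), stored_challenge)
```

The challenge travels with the authorization request, the verifier only with the token request, so an intercepted code is useless without the verifier.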
Dynamic Registration (RFC 7591)#
# Public registration endpoint
@app.post("/register")
async def register_client(request: ClientRegistration):
    client_id = generate_client_id()
    client_secret = generate_client_secret()
    registration_token = generate_registration_token()
    # Store in Redis
    return ClientRegistrationResponse(...)
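The `generate_*` helpers aren't shown in this excerpt; one plausible implementation uses the `secrets` module (the prefixes and token lengths here are assumptions):

```python
import secrets

def generate_client_id() -> str:
    # Opaque, URL-safe client identifier
    return f"client_{secrets.token_urlsafe(16)}"

def generate_client_secret() -> str:
    # High-entropy secret; production code should store only a hash
    return secrets.token_urlsafe(32)

def generate_registration_token() -> str:
    # RFC 7591/7592 registration_access_token for later client management
    return f"reg-{secrets.token_urlsafe(32)}"
```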
JWT Service#
# Token generation with RS256 (PyJWT; BASE_DOMAIN, private_key,
# ACCESS_TOKEN_LIFETIME, and generate_jti are module-level)
from time import time

import jwt

def generate_token(user_id: str, client_id: str) -> str:
    payload = {
        "iss": f"https://auth.{BASE_DOMAIN}",
        "sub": f"github|{user_id}",
        "aud": client_id,
        "exp": int(time()) + ACCESS_TOKEN_LIFETIME,
        "jti": generate_jti(),
    }
    return jwt.encode(payload, private_key, algorithm="RS256")
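On the verification side, the signature check aside, the claims themselves must be validated. A minimal sketch of those checks on an already-decoded payload (the claim names mirror the payload above; the function is illustrative):

```python
from time import time

def validate_claims(payload: dict, *, expected_iss: str, expected_aud: str) -> bool:
    """Check issuer, audience, and expiry on a signature-verified JWT payload."""
    if payload.get("iss") != expected_iss:
        return False
    if payload.get("aud") != expected_aud:
        return False
    # exp is seconds since the epoch; reject expired tokens
    return payload.get("exp", 0) > time()
```

In practice PyJWT's `jwt.decode` performs these checks when given `audience=` and `issuer=` arguments; the sketch just makes the logic explicit.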
GitHub Integration#
OAuth application for user authentication
User allowlist enforcement
Profile data extraction
Redis Storage Architecture#
Data Models#
# Client Registration
oauth:client:{client_id} = {
    "client_id": str,
    "client_secret": str,
    "client_name": str,
    "redirect_uris": List[str],
    "registration_access_token": str,
    "created_at": datetime,
    "expires_at": datetime
}

# Access Token
oauth:token:{jti} = {
    "client_id": str,
    "user_id": str,
    "scope": str,
    "issued_at": datetime,
    "expires_at": datetime
}

# User Token Index
oauth:user_tokens:{username} = Set[jti]
TTL Management#
State tokens: 5 minutes
Authorization codes: 10 minutes (or 1 year for long-lived)
Access tokens: 30 days
Refresh tokens: 1 year
Client registrations: 90 days (or eternal)
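Applied with Redis `SETEX`, those lifetimes become per-key-class TTLs in seconds; a sketch (the mapping follows the list above, the constant and key-class names are illustrative):

```python
# TTLs in seconds for each key class
TTLS = {
    "state": 5 * 60,                    # state tokens: 5 minutes
    "auth_code": 10 * 60,               # authorization codes: 10 minutes
    "access_token": 30 * 24 * 3600,     # access tokens: 30 days
    "refresh_token": 365 * 24 * 3600,   # refresh tokens: 1 year
    "client": 90 * 24 * 3600,           # client registrations: 90 days
}

def ttl_for(kind: str) -> int:
    return TTLS[kind]
```

Usage is then `redis.setex(key, ttl_for("access_token"), value)`; the long-lived and "eternal" variants would skip the expiry entirely.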
Layer 3: MCP Services#
Proxy Pattern Architecture#
Components#
┌─────────────────────────────────────┐
│ mcp-streamablehttp-proxy │
├─────────────────────────────────────┤
│ HTTP Server (FastAPI) │
│ Session Manager │
│ Process Manager │
│ Message Router │
├─────────────────────────────────────┤
│ stdio subprocess │
│ (Official MCP Server) │
└─────────────────────────────────────┘
Session Management#
sessions = {
    "session-id": {
        "process": subprocess.Popen(...),
        "created_at": datetime,
        "last_activity": datetime,
        "message_queue": Queue()
    }
}
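Sessions that stop sending traffic must eventually be reaped, or their subprocesses leak. A sketch of idle-session cleanup over the structure above (the timeout value and function name are assumptions):

```python
from datetime import datetime, timedelta

IDLE_TIMEOUT = timedelta(minutes=30)  # assumed value

def reap_idle_sessions(sessions: dict, now=None) -> list:
    """Terminate and drop sessions idle longer than IDLE_TIMEOUT; return reaped ids."""
    now = now or datetime.utcnow()
    reaped = []
    for sid, sess in list(sessions.items()):
        if now - sess["last_activity"] > IDLE_TIMEOUT:
            proc = sess.get("process")
            if proc is not None:
                proc.terminate()  # ask the stdio server to exit cleanly
            del sessions[sid]
            reaped.append(sid)
    return reaped
```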
Message Flow#
HTTP request received
Session lookup/creation
Forward to subprocess stdin
Read from subprocess stdout
Return as SSE stream
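Steps 3 and 4 are line-delimited JSON-RPC over the subprocess's stdin/stdout. A minimal round-trip sketch, with a toy echo server standing in for an official MCP server:

```python
import json
import subprocess
import sys

# Toy stdio server standing in for a real MCP server: echoes each
# JSON-RPC request back as a result.
ECHO_SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["method"]}}
    print(json.dumps(resp), flush=True)
"""

def send_request(proc, method: str, req_id: int) -> dict:
    # Step 3: forward the request to the subprocess's stdin
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": {}}
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    # Step 4: read one line of response from stdout
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen(
    [sys.executable, "-c", ECHO_SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
resp = send_request(proc, "initialize", 1)
proc.stdin.close()
proc.wait()
```

The proxy then wraps each response in an SSE `data:` event for step 5; that framing is omitted here.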
Native Pattern Architecture#
Components#
┌─────────────────────────────────────┐
│ Native StreamableHTTP Server │
├─────────────────────────────────────┤
│ FastAPI Application │
│ MCP Protocol Handler │
│ Tool Implementations │
│ Direct HTTP Responses │
└─────────────────────────────────────┘
Protocol Implementation#
@app.post("/mcp")
async def handle_mcp(request: Request):
    body = await request.json()
    if body["method"] == "initialize":
        return handle_initialize(body)
    elif body["method"] == "tools/call":
        return handle_tool_call(body)
    # ... other methods
    # Fall through to a JSON-RPC "method not found" error
    return {"jsonrpc": "2.0", "id": body.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}
Inter-Component Communication#
Network Architecture#
All components communicate via Docker network:
networks:
  public:
    driver: bridge
    internal: false
Service Discovery#
Services discover each other by name:
http://auth:8000
http://redis:6379
http://mcp-service:3000
Security Boundaries#
External → Traefik (HTTPS required)
↓
Traefik → Auth (Internal HTTP)
↓
Auth → Redis (Internal Redis protocol)
↓
Traefik → MCP Services (Internal HTTP)
Component Lifecycle#
Startup Sequence#
Redis starts first (no dependencies)
Auth waits for Redis health
Traefik starts (no hard dependencies)
MCP Services start in parallel
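In Compose terms, that ordering is expressed with `depends_on` conditions; a sketch (image tags and extra service names are assumptions, only the dependency shape follows the list above):

```yaml
services:
  redis:
    image: redis:7            # starts first: no dependencies
  auth:
    depends_on:
      redis:
        condition: service_healthy   # wait for Redis health check
  traefik:
    image: traefik:v3.0       # no hard dependencies
  mcp-fetch: {}               # MCP services start in parallel
  mcp-filesystem: {}
```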
Health Checks#
Each component implements health checks:
# Redis
test: ["CMD", "redis-cli", "ping"]
# Auth
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
# Traefik
test: ["CMD", "traefik", "healthcheck"]
# MCP Services
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
Graceful Shutdown#
Components handle SIGTERM:
Stop accepting new requests
Complete in-flight requests
Close connections cleanly
Exit with status 0
Storage Volumes#
volumes:
  traefik-certificates:   # Persistent
  redis-data:             # Persistent
  auth-keys:              # Persistent
  logs:                   # Rotated
Logging Structure#
Log Organization#
Centralized logging structure:
logs/
├── traefik/
│ ├── access.log
│ └── traefik.log
├── auth/
│ └── app.log
└── {service}/
└── service.log
Critical Check Points#
SSL certificate expiry
Redis memory usage
Auth service errors
Service health failures