HyperRoute

Deployment Guide

HyperRoute ships as a single binary with no runtime dependencies. Deploy it anywhere — Kubernetes, Docker, or directly on bare metal.


Kubernetes

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hyperroute
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hyperroute
  template:
    metadata:
      labels:
        app: hyperroute
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9091"
    spec:
      containers:
        - name: routerd
          image: hyperroute/routerd:latest
          ports:
            - containerPort: 4000
              name: http
            - containerPort: 9091
              name: metrics
          args:
            - serve
            - --config
            - /etc/hyperroute/router.yaml
          env:
            - name: RUST_LOG
              value: "info"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://tempo:4317"
          livenessProbe:
            httpGet:
              path: /health
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 4000
            initialDelaySeconds: 3
            periodSeconds: 5
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "1000m"
          volumeMounts:
            - name: config
              mountPath: /etc/hyperroute
      volumes:
        - name: config
          configMap:
            name: hyperroute-config

Service

apiVersion: v1
kind: Service
metadata:
  name: hyperroute
spec:
  selector:
    app: hyperroute
  ports:
    - name: http
      port: 4000
      targetPort: 4000
    - name: metrics
      port: 9091
      targetPort: 9091

Key Points

  • Replicas: Start with 3, scale based on CPU usage
  • Health probes: Use /health for both liveness and readiness
  • Prometheus scraping: Annotate pods for automatic discovery on port 9091
  • Resource requests: 64Mi memory / 100m CPU is sufficient for most workloads
  • Resource limits: 256Mi memory handles 10K+ concurrent requests
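The "scale based on CPU usage" advice above can be automated with a HorizontalPodAutoscaler targeting the Deployment. This is a sketch: the 70% utilization target and the 10-replica ceiling are assumptions, not HyperRoute recommendations.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hyperroute
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hyperroute
  minReplicas: 3            # matches the Deployment's baseline of 3 replicas
  maxReplicas: 10           # assumed ceiling; size for your peak traffic
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70% of requests
```

Utilization is measured against the 100m CPU request set in the Deployment, so tighten or loosen the target together with the request values.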

Docker Compose (Development)

A complete local development stack with monitoring:

version: "3.8"
services:
  routerd:
    image: hyperroute/routerd:latest
    ports:
      - "4000:4000"
      - "9091:9091"
    volumes:
      - ./router.yaml:/etc/hyperroute/router.yaml
    command: ["serve", "--config", "/etc/hyperroute/router.yaml"]
    environment:
      RUST_LOG: debug
      OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4317

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./observability/prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin

  tempo:
    image: grafana/tempo:latest
    ports:
      - "4317:4317"    # OTLP gRPC
      - "3200:3200"    # Tempo query

This gives you:

  • HyperRoute at http://localhost:4000 (with playground)
  • Prometheus at http://localhost:9090
  • Grafana at http://localhost:3000 (admin/admin)
  • Tempo at http://localhost:3200 (tracing)
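The ./observability/prometheus.yml mounted into the Prometheus container could contain a minimal scrape config like the following; the job name and scrape interval are assumptions, only the routerd:9091 metrics target comes from the stack above.

```yaml
global:
  scrape_interval: 15s          # assumed default; tune to taste

scrape_configs:
  - job_name: hyperroute        # hypothetical job name
    static_configs:
      - targets: ["routerd:9091"]   # routerd's metrics port on the Compose network
```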

systemd (Bare Metal)

[Unit]
Description=HyperRoute GraphQL Router
After=network.target

[Service]
Type=simple
User=hyperroute
Group=hyperroute
ExecStart=/usr/local/bin/routerd serve --config /etc/hyperroute/router.yaml
Restart=always
RestartSec=5
LimitNOFILE=65535

Environment=RUST_LOG=info
Environment=OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

[Install]
WantedBy=multi-user.target

Install and enable:

sudo cp routerd /usr/local/bin/
sudo cp hyperroute.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable hyperroute
sudo systemctl start hyperroute

Graceful Shutdown

HyperRoute supports graceful shutdown with configurable drain timeout:

  1. SIGTERM/SIGINT received
  2. Stop accepting new connections
  3. Wait for in-flight requests to complete (up to drain_timeout_secs)
  4. Force shutdown after force_shutdown_after_secs
  5. Flush OTLP spans before exiting

Configure the drain behavior in router.yaml:

shutdown:
  drain_timeout_secs: 30
  force_shutdown_after_secs: 35
  log_active_requests: true
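On Kubernetes, the pod's termination grace period should exceed force_shutdown_after_secs, otherwise the kubelet SIGKILLs routerd mid-drain; the default is 30 seconds, which is shorter than the 35-second force shutdown above. A sketch of the fix (the 40-second figure is an assumption leaving headroom over 35 seconds):

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 40   # must exceed force_shutdown_after_secs (35)
      containers:
        - name: routerd
          # ... rest of the container spec from the Deployment above
```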

Production Checklist

Before going live, ensure you have:

  • Health checks configured for your orchestration platform
  • Metrics being scraped by Prometheus (port 9091)
  • Alerting rules for error rates and latency
  • Distributed tracing for debugging (Jaeger/Tempo)
  • Layered caching enabled (L1 memory + L2 Redis)
  • Graceful shutdown configured with appropriate drain timeout
  • Resource limits set for memory and CPU
  • Introspection disabled if not needed (security.enable_introspection: false)
  • Persisted operations enabled for known clients
  • File descriptor limits increased (LimitNOFILE=65535)

Performance Tuning

Setting                              Development   Production
server.workers                       auto          auto (matches CPU count)
cache.backend                        memory        layered
observability.tracing.sample_rate    1.0           0.1 (10%)
RUST_LOG                             debug         info
shutdown.drain_timeout_secs          5             30
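Assuming the table's keys map directly onto router.yaml sections (a sketch, not a verified schema), the production column would translate to:

```yaml
server:
  workers: auto              # matches CPU count in production

cache:
  backend: layered           # L1 memory + L2 Redis

observability:
  tracing:
    sample_rate: 0.1         # sample 10% of requests

shutdown:
  drain_timeout_secs: 30
```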

Next Steps