
Kubernetes Deployment

This guide provides complete Kubernetes manifests for deploying Authgent in production. The deployment includes:

  • 2 replicas with rolling updates
  • Resource limits and requests
  • Health checks (liveness and readiness probes)
  • TLS via Ingress
  • Signing key stored as a Kubernetes Secret
  • Non-sensitive configuration in a ConfigMap
  • Horizontal Pod Autoscaler

All manifests use the authgent namespace. Create it first:

kubectl create namespace authgent

Store the signing key and database password together in a single Kubernetes Secret:

secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: authgent-secrets
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
type: Opaque
stringData:
  # Database password
  db-password: "your-secure-password-here"
data:
  # Base64-encoded EC private key PEM
  # Generate with: cat ec-private.pem | base64 | tr -d '\n'
  ec-private.pem: LS0tLS1CRUdJTi4uLg==

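The value of ec-private.pem under data: must be the base64 of the PEM file, exactly as the comment's pipeline produces it. A quick round-trip check of the encoding, using the literal placeholder string in place of a real key:

```shell
# Encode the placeholder the same way the comment suggests for a real key,
# then decode the manifest's value to confirm it round-trips.
printf '%s' '-----BEGIN...' | base64 | tr -d '\n'   # prints LS0tLS1CRUdJTi4uLg==
echo
printf '%s' 'LS0tLS1CRUdJTi4uLg==' | base64 -d      # prints -----BEGIN...
```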
Alternatively, create the same secret from files instead of inline data:

# Generate the signing key
openssl ecparam -genkey -name prime256v1 -noout -out ec-private.pem
# Create the secret from files
kubectl create secret generic authgent-secrets \
  --namespace authgent \
  --from-file=ec-private.pem=ec-private.pem \
  --from-literal=db-password="your-secure-password-here"
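Before loading the key into the Secret, it is worth confirming that OpenSSL can parse what it generated (a sanity check, not part of the deployment itself):

```shell
# Regenerate (or reuse) the key, then confirm it is a valid SEC1 EC private key
openssl ecparam -genkey -name prime256v1 -noout -out ec-private.pem
head -1 ec-private.pem          # prints -----BEGIN EC PRIVATE KEY-----
openssl ec -in ec-private.pem -noout -check
```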

Non-sensitive configuration goes in a ConfigMap:

configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: authgent-config
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
data:
  AUTHGENT_ISSUER: "https://auth.yourcompany.com"
  AUTHGENT_SIGNING_KEY: "/keys/ec-private.pem"
  AUTHGENT_SIGNING_ALG: "ES256"
  AUTHGENT_PORT: "8080"
  AUTHGENT_LOG_LEVEL: "info"
  AUTHGENT_LOG_FORMAT: "json"
  AUTHGENT_TOKEN_TTL: "300"
  AUTHGENT_REFRESH_TTL: "86400"
  AUTHGENT_DCR_MODE: "constrained"
  AUTHGENT_DCR_ALLOWED_REDIRECTS: "https://*.yourcompany.com/*"
  AUTHGENT_KEY_ROTATION_DAYS: "30"
  AUTHGENT_CORS_ORIGINS: "https://app.yourcompany.com"
  AUTHGENT_RATE_LIMIT: "100"

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authgent
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
    app.kubernetes.io/version: "0.1.0"
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: authgent
      app.kubernetes.io/component: auth-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: authgent
        app.kubernetes.io/component: auth-server
        app.kubernetes.io/version: "0.1.0"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: authgent
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: authgent
          image: authgent/authgent:0.1.0
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: metrics
              containerPort: 9090
              protocol: TCP
          envFrom:
            - configMapRef:
                name: authgent-config
          env:
            # DB_PASSWORD must come before AUTHGENT_DB_DSN: Kubernetes
            # expands $(VAR) references only against variables defined
            # earlier in this list.
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authgent-secrets
                  key: db-password
            - name: AUTHGENT_DB_DSN
              value: "postgres://authgent:$(DB_PASSWORD)@postgres.authgent.svc.cluster.local:5432/authgent?sslmode=disable"
            - name: AUTHGENT_METRICS_PORT
              value: "9090"
            # For a managed database, change the host in the DSN, or use
            # external-secrets-operator to deliver the full DSN directly.
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 12
          volumeMounts:
            - name: signing-key
              mountPath: /keys
              readOnly: true
      volumes:
        - name: signing-key
          secret:
            secretName: authgent-secrets
            items:
              - key: ec-private.pem
                path: ec-private.pem
                mode: 0400
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: authgent

service.yaml
apiVersion: v1
kind: Service
metadata:
  name: authgent
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
  ports:
    - name: http
      port: 8080
      targetPort: http
      protocol: TCP
    - name: metrics
      port: 9090
      targetPort: metrics
      protocol: TCP

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authgent
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "1m"
    # ingress-nginx rate limiting: requests per second per client IP,
    # with burst = limit-rps * limit-burst-multiplier
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "2"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - auth.yourcompany.com
      secretName: authgent-tls
  rules:
    - host: auth.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: authgent
                port:
                  name: http

hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: authgent
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: authgent
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120
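The scaling decision behind this HPA follows the standard formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A worked example of a scale-up at 90% average CPU (the utilization figure is illustrative):

```shell
# desired = ceil(current_replicas * current_cpu_pct / target_cpu_pct)
current_replicas=2
current_cpu=90   # percent, averaged across pods
target_cpu=70    # from the HPA spec above
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"  # prints 3
```

The scaleUp policy above then caps growth at 2 pods per 60-second window, so the HPA would reach this target in a single step.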
serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: authgent
  namespace: authgent
  labels:
    app.kubernetes.io/name: authgent
    app.kubernetes.io/component: auth-server
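Nothing in the manifests above requires Authgent to talk to the Kubernetes API. If that holds for your build, an optional hardening step is to disable service-account token automount; verify against your own setup before applying:

```yaml
# Append to serviceaccount.yaml: the pod then receives no API token.
automountServiceAccountToken: false
```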

For production, use a managed PostgreSQL service (AWS RDS, GCP Cloud SQL, Azure Database for PostgreSQL). If you need to run PostgreSQL in-cluster:

postgres.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: authgent
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: postgres
      app.kubernetes.io/component: database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgres
        app.kubernetes.io/component: database
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: authgent
            - name: POSTGRES_USER
              value: authgent
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authgent-secrets
                  key: db-password
            # Keep data in a subdirectory so initdb tolerates the
            # lost+found directory on a freshly formatted volume
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - authgent
                - -d
                - authgent
            initialDelaySeconds: 10
            periodSeconds: 10
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: authgent
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/component: database
  ports:
    - port: 5432
      targetPort: 5432

When using in-cluster PostgreSQL, set the DSN as a container env entry rather than a ConfigMap value: Kubernetes expands $(VAR) references only in env values, and only against variables defined earlier in the same list, so values loaded via envFrom are never interpolated. With DB_PASSWORD pulled from the secret first, the DSN can reference it:

- name: AUTHGENT_DB_DSN
  value: "postgres://authgent:$(DB_PASSWORD)@postgres.authgent.svc.cluster.local:5432/authgent?sslmode=disable"
# Apply all manifests
kubectl apply -f serviceaccount.yaml
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f postgres.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
kubectl apply -f hpa.yaml
# Or apply an entire directory
kubectl apply -f k8s/
# Verify the deployment
kubectl -n authgent get pods
kubectl -n authgent get svc
kubectl -n authgent get ingress
# Check logs
kubectl -n authgent logs -l app.kubernetes.io/name=authgent --tail=50
# Verify Authgent is responding
kubectl -n authgent port-forward svc/authgent 8080:8080
curl http://localhost:8080/.well-known/oauth-authorization-server | jq .
  1. Use a managed PostgreSQL — AWS RDS, GCP Cloud SQL, or Azure Database. Don’t run StatefulSets for databases in production unless you have a dedicated database team.
  2. Pin image tags — Use authgent/authgent:0.1.0, never latest in production.
  3. Enable TLS — Install cert-manager and use the Ingress TLS configuration above.
  4. Set resource limits — The defaults above are conservative. Monitor actual usage and adjust.
  5. Configure topology spread — The topologySpreadConstraints ensure replicas run on different nodes.
  6. External secrets — Use external-secrets-operator or Sealed Secrets instead of plain Kubernetes Secrets for the signing key.
  7. Network policies — Restrict traffic so only the Ingress controller and your MCP servers can reach Authgent.
  8. Backup — Configure automated PostgreSQL backups via your managed database service or Velero.
  9. Monitoring — Scrape the Prometheus metrics endpoint and set up alerts for error rates and latency.
  10. Pod Disruption Budget — Add a PDB to prevent all replicas from being evicted simultaneously during node maintenance:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: authgent
  namespace: authgent
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: authgent
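Checklist item 7 can be sketched as a NetworkPolicy. This assumes the ingress controller runs in a namespace named ingress-nginx (adjust the selector to match your cluster) and that your CNI actually enforces NetworkPolicy; rules for your MCP servers and for Prometheus scraping the metrics port would be added alongside:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: authgent
  namespace: authgent
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: authgent
  policyTypes:
    - Ingress
  ingress:
    # Allow only the ingress controller's namespace to reach the HTTP port
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```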