# Tailscale - Deployment
Add the following under `spec.tailscale` in the cluster definition:
```yaml
spec:
  tailscale:
    enabled: true
    oauth_client_id: your-oauth-client-id
    oauth_client_secret_payload: kms-encrypted-secret
    replicas: 2
    extra_routes:
      - "192.168.248.0/24"
    cpu_requests: "100m"
    memory: "128Mi"
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable or disable Tailscale integration |
| oauth_client_id | string | - | Tailscale OAuth Client ID |
| oauth_client_secret_payload | string | - | KMS-encrypted OAuth Client Secret with the k8s_stack=secrets context |
| replicas | integer | 2 | Number of Tailscale connector pods |
| extra_routes | list(string) | [] | Additional CIDR blocks to advertise |
| exit_node_enabled | boolean | false | Enable exit node functionality |
| cpu_requests | string | "100m" | CPU request per pod |
| memory | string | "128Mi" | Memory limit per pod |
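
The value for `oauth_client_secret_payload` must be encrypted with KMS using the `k8s_stack=secrets` encryption context. Below is a minimal sketch using boto3 and AWS KMS; the key alias, plaintext value, and base64 payload encoding are assumptions, so adjust them to match your KMS setup and tooling:

```python
import base64

import boto3

# Encrypt the Tailscale OAuth client secret with the k8s_stack=secrets
# encryption context required for oauth_client_secret_payload.
kms = boto3.client("kms")
response = kms.encrypt(
    KeyId="alias/k8s-stack-secrets",            # hypothetical key alias
    Plaintext=b"your-oauth-client-secret",      # the OAuth client secret value
    EncryptionContext={"k8s_stack": "secrets"},
)

# Base64-encode the ciphertext so it can be pasted into the cluster definition.
payload = base64.b64encode(response["CiphertextBlob"]).decode()
print(payload)
```

The printed string is what goes into `oauth_client_secret_payload` in the cluster definition.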
## Adding Custom Routes
To advertise additional network ranges (e.g., peered VPCs, on-premises networks):
```yaml
spec:
  tailscale:
    extra_routes:
      - "192.168.0.0/16" # On-premises network
      - "172.31.0.0/16"  # Peered VPC
```

## Enabling Exit Node (SKS-Managed Only)
To use the cluster as an exit node for all traffic:
```yaml
spec:
  sksMgmt:
    tailscale:
      exit_node_enabled: true
```

## Resource Tuning
Adjust resource requests based on your traffic patterns:
```yaml
spec:
  tailscale:
    cpu_requests: "200m" # Increase for high-throughput scenarios
    memory: "256Mi"      # Increase for many simultaneous connections
    replicas: 3          # Add more replicas for higher availability
```

The Vertical Pod Autoscaler will adjust these values over time based on actual usage.