Tailscale - Deployment

Add under spec.tailscale in the cluster definition:

spec:
  tailscale:
    enabled: true
    oauth_client_id: your-oauth-client-id
    oauth_client_secret_payload: kms-encrypted-secret
    replicas: 2
    extra_routes:
      - "192.168.248.0/24"
    cpu_requests: "100m"
    memory: "128Mi"
Parameter                      Type           Default    Description
enabled                        boolean        false      Enable or disable Tailscale integration
oauth_client_id                string         -          Tailscale OAuth client ID
oauth_client_secret_payload    string         -          KMS-encrypted OAuth client secret with the k8s_stack=secrets context
replicas                       integer        2          Number of Tailscale connector pods
extra_routes                   list(string)   []         Additional CIDR blocks to advertise
exit_node_enabled              boolean        false      Enable exit node functionality
cpu_requests                   string         "100m"     CPU request per pod
memory                         string         "128Mi"    Memory limit per pod
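The oauth_client_secret_payload must be encrypted with KMS using the k8s_stack=secrets encryption context. As a sketch, assuming AWS KMS and a hypothetical key alias (substitute the key used by your cluster), the payload could be produced like this:

```shell
# Encrypt the Tailscale OAuth client secret with the k8s_stack=secrets
# encryption context. The key alias below is a placeholder.
aws kms encrypt \
  --key-id alias/k8s-stack-secrets \
  --plaintext fileb://oauth-client-secret.txt \
  --encryption-context k8s_stack=secrets \
  --query CiphertextBlob \
  --output text
```

The resulting base64 ciphertext is what goes into oauth_client_secret_payload; decryption will fail unless the same encryption context is supplied.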

Adding Custom Routes

To advertise additional network ranges (e.g., peered VPCs, on-premises networks):

spec:
  tailscale:
    extra_routes:
      - "192.168.0.0/16"    # On-premises network
      - "172.31.0.0/16"     # Peered VPC

Enabling Exit Node (SKS-Managed Only)

To use the cluster as an exit node for all traffic:

spec:
  sksMgmt:
    tailscale:
      exit_node_enabled: true
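Exit nodes likewise require approval in the Tailscale admin console before clients can use them. Once approved, a client opts in with the standard Tailscale CLI (the node name below is a placeholder):

```shell
# Route all of this client's traffic through the cluster's exit node
tailscale set --exit-node=<exit-node-name>

# Stop using the exit node
tailscale set --exit-node=
```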

Resource Tuning

Adjust resource requests based on your traffic patterns:

spec:
  tailscale:
    cpu_requests: "200m"   # Increase for high-throughput scenarios
    memory: "256Mi"        # Increase for many simultaneous connections
    replicas: 3            # Add more replicas for higher availability

The Vertical Pod Autoscaler will adjust these values over time based on actual usage.
