# Cluster Access

This section explains how to reach both private OKE clusters safely and manage kubeconfig contexts. After deploying `module.bastion` and `module.oke` (Phase 1), both OKE clusters are private and require SSH tunnels via the OCI Bastion Service for access. Create and use a local `kubeconfig-dr.yaml` (gitignored) to manage the `oci-primary` and `oci-secondary` contexts.
## Prerequisites

- OCI CLI installed and configured with a profile matching `config_file_profile` (default: `DEFAULT`)
- SSH keys configured (`~/.ssh/id_ed25519` and `~/.ssh/id_ed25519.pub`)
- Phase 1 infrastructure deployed (`module.bastion` + `module.oke`)
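A quick pre-flight check for the items above can be scripted; the `check_cmd`/`check_file` helper names are illustrative, not part of this repo's tooling:

```shell
#!/usr/bin/env bash
# Pre-flight sketch: verify the prerequisites listed above are in place.

check_cmd() {  # print OK if a command is on PATH, MISSING otherwise
  if command -v "$1" >/dev/null 2>&1; then echo "OK: $1"; else echo "MISSING: $1"; fi
}

check_file() {  # print OK if a file exists, MISSING otherwise
  if [ -f "$1" ]; then echo "OK: $1"; else echo "MISSING: $1"; fi
}

check_cmd oci
check_cmd kubectl
check_cmd terraform
check_file "$HOME/.ssh/id_ed25519"
check_file "$HOME/.ssh/id_ed25519.pub"
```

Any `MISSING:` line should be resolved before proceeding to Phase 1.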
## Kubernetes API Access Modes

OKE clusters can be configured with two different access modes for the Kubernetes API. Choose based on your security requirements:

| Access Mode | `provision_bastion` | `endpoint_public_access` | Use Case |
|---|---|---|---|
| Bastion Tunnel | `true` | `false` | Production clusters with strict network isolation |
| Public Endpoint | `false` | `true` | Development/testing or when direct access is acceptable |

| Feature | Bastion Tunnel | Public Endpoint |
|---|---|---|
| Network exposure | Private only (VCN) | Public internet (IP-restricted) |
| `kubernetes_api_host` | Required (tunnel URL) | Auto-detected from kubeconfig |
| SSH tunnel required | Yes | No |
| Terraform commands | Need `-var="kubernetes_api_host=..."` | No extra variables needed |
| Security | Higher (no public exposure) | Medium (IP allowlist via `control_plane_allowed_cidrs`) |
**Important:** The `kubernetes_api_host` variable should only be set when using bastion tunnel mode (`provision_bastion = true`). When using public endpoint mode (`endpoint_public_access = true`), do not set this variable; the Kubernetes and Helm providers will automatically discover the cluster's public endpoint from the OCI-generated kubeconfig.
```
# tfvars for bastion tunnel access
provision_bastion      = true
endpoint_public_access = false
kubernetes_api_host    = "https://127.0.0.1:16443" # Tunnel port

# Required: CIDRs allowed to connect to bastion
bastion_client_allow_list = [
  "203.176.185.0/24", # Your office IP range
  "147.161.213.0/24"  # VPN range
]
```

Terraform usage with bastion:
```
# 1. Start bastion tunnel (separate terminal)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# 2. Run terraform with kubernetes_api_host
terraform apply -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"
```

```
# tfvars for public endpoint access
provision_bastion      = false
endpoint_public_access = true
# kubernetes_api_host is NOT needed - auto-detected from kubeconfig

# Required: CIDRs allowed to access K8s API (port 6443)
control_plane_allowed_cidrs = [
  "203.176.185.0/24", # Your office IP range
  "147.161.213.0/24"  # VPN range
]
```

Terraform usage with public endpoint:
```
# No tunnel needed - direct access
terraform apply -var-file=single-us-chicago-1.tfvars
```

Note: When `endpoint_public_access = true` and `provision_bastion = false`, the Kubernetes and Helm providers automatically discover the cluster's public endpoint from the OCI kubeconfig. You do not need to set `kubernetes_api_host`.
## Establishing SSH Tunnels

Both OKE clusters are deployed with private API endpoints, meaning the Kubernetes API server is not directly accessible from the internet. To interact with the clusters via kubectl or Terraform, you must establish an SSH tunnel through the OCI Bastion Service.

The bastion tunnel creates a secure, encrypted connection from your local machine to the OKE cluster's private API endpoint:

Local machine → (SSH over port 22) → OCI Bastion Service → (private network) → OKE API Server

The tunnel forwards a local port (e.g., 16443) to the cluster's internal API endpoint (port 6443). kubectl and Terraform then connect to `https://127.0.0.1:<local-port>`, which is transparently forwarded to the private cluster.
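The workspace-to-port convention used throughout this guide (primary on 16443, secondary on 16444) can be captured in a small helper; the function names here are illustrative and are not part of `setup-bastion-tunnel.sh`:

```shell
#!/usr/bin/env bash
# Map a terraform workspace to its local tunnel port and API host.
# Assumption: the 16443/16444 convention documented in this section.

local_port_for() {
  case "$1" in
    primary)   echo 16443 ;;
    secondary) echo 16444 ;;
    *) echo "unknown workspace: $1" >&2; return 1 ;;
  esac
}

api_host_for() {
  echo "https://127.0.0.1:$(local_port_for "$1")"
}
```

Under these assumptions, `-var="kubernetes_api_host=$(api_host_for primary)"` produces the same value used in the examples below.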
Run these commands in separate terminals to establish tunnels to both clusters:

```
# Terminal 1: Primary cluster tunnel (port 16443)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# Terminal 2: Secondary cluster tunnel (port 16444)
LOCAL_PORT=16444 ./scripts/setup-bastion-tunnel.sh --workspace secondary kubectl
```

## Using kubectl

Set the kubeconfig (required for all kubectl commands):

```
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
```

Access clusters:
```
# Access PRIMARY cluster (tunnel on port 16443)
kubectl --context oci-primary get pods -n logging

# Access SECONDARY cluster (tunnel on port 16444)
kubectl --context oci-secondary get pods -n logging

# Switch default context
kubectl config use-context oci-primary
kubectl get pods -n logging # Uses primary

kubectl config use-context oci-secondary
kubectl get pods -n logging # Uses secondary
```

Verify tunnels are running:

```
lsof -i :16443 # Primary tunnel
lsof -i :16444 # Secondary tunnel
```

## Kubeconfig Management
| Context | Cluster | Tunnel Port | Default Namespace |
|---|---|---|---|
| oci-primary | dr-primary | 16443 | default |
| oci-secondary | dr-secondary | 16444 | default |
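Each context's tunnel port (table above) can also be probed without `lsof`, using a plain TCP connect test. This sketch relies on bash's `/dev/tcp` pseudo-device (a bashism, not POSIX sh), and the `tunnel_up` name is illustrative:

```shell
#!/usr/bin/env bash
# Report whether something is listening on a local tunnel port.
# Uses bash's /dev/tcp pseudo-device; requires bash, not POSIX sh.

tunnel_up() {
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed"
  fi
}

tunnel_up 16443  # primary
tunnel_up 16444  # secondary
```

If either port reports `closed`, restart the corresponding tunnel before running kubectl or Terraform.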
Note: The kubeconfig uses the OCI CLI exec credential plugin to automatically generate and refresh tokens.

### Creating/Updating kubeconfig-dr.yaml

If `kubeconfig-dr.yaml` doesn't exist or needs to be regenerated (note: it is intentionally gitignored):
```
# 1. Get cluster IDs from Terraform
terraform workspace select primary
PRIMARY_CLUSTER_ID=$(terraform output -raw cluster_id)
terraform workspace select secondary
SECONDARY_CLUSTER_ID=$(terraform output -raw cluster_id)

# 2. Create the kubeconfig-dr.yaml file
OCI_CLI_PROFILE="${OCI_CLI_PROFILE:-DEFAULT}" # Must match your OCI CLI profile / config_file_profile
cat > kubeconfig-dr.yaml << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:16443
  name: oci-primary
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:16444
  name: oci-secondary
contexts:
- context:
    cluster: oci-primary
    user: oci-primary-user
  name: oci-primary
- context:
    cluster: oci-secondary
    user: oci-secondary-user
  name: oci-secondary
current-context: oci-primary
users:
- name: oci-primary-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ${PRIMARY_CLUSTER_ID}
      - --region
      - us-chicago-1
      - --profile
      - ${OCI_CLI_PROFILE}
      - --auth
      - api_key
      env:
      - name: SUPPRESS_LABEL_WARNING
        value: "True"
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: oci-secondary-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ${SECONDARY_CLUSTER_ID}
      - --region
      - us-chicago-1
      - --profile
      - ${OCI_CLI_PROFILE}
      - --auth
      - api_key
      env:
      - name: SUPPRESS_LABEL_WARNING
        value: "True"
      interactiveMode: IfAvailable
      provideClusterInfo: false
EOF
```

| Purpose | Primary Port | Secondary Port |
|---|---|---|
| DR Simulation / kubeconfig-dr.yaml (default) | 16443 | 16444 |
| Verification/Debug | 26443 | 26444 |
| Alternative | 36443 | 36444 |
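When scripting against the table above, the override flag can be derived from the port; the `tf_api_host_var` helper name is illustrative:

```shell
#!/usr/bin/env bash
# Build the -var flag for a given local tunnel port (ports from the table above).

tf_api_host_var() {
  printf -- '-var=kubernetes_api_host=https://127.0.0.1:%s' "$1"
}

# Example (hypothetical invocation using the Verification/Debug primary port):
#   terraform plan -var-file=primary-us-chicago-1.tfvars "$(tf_api_host_var 26443)"
```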
When running SSH tunnels on non-default ports, override the `kubernetes_api_host` variable:

```
# Primary cluster tunnel on port 16443
terraform workspace select primary
terraform plan -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"
terraform apply -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"

# Secondary cluster tunnel on port 16444
terraform workspace select secondary
terraform plan -var-file=secondary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16444"
terraform apply -var-file=secondary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16444"
```

## Primary Deployment (workspace: primary, dr="active")
```
# Initialize (one-time per workspace)
terraform workspace new primary || terraform workspace select primary
terraform init -backend-config=backend-configs/primary-oci.tfbackend
export TFVARS="primary-us-chicago-1.tfvars"

# =============================================================================
# PHASE 1: Infrastructure (no tunnel required)
# These modules create OCI resources only - no Kubernetes API access needed
# =============================================================================

# 1) Infra foundation
terraform apply -var-file="$TFVARS" -target=module.core -target=module.logscale-storage -auto-approve

# 2) OKE cluster + Bastion service
terraform apply -var-file="$TFVARS" -target=module.bastion -target=module.oke -auto-approve

# =============================================================================
# PHASE 2: Kubernetes resources (tunnel required)
# These modules require Kubernetes API access via bastion tunnel
# =============================================================================

# 3) Start bastion tunnel (in a separate terminal)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# 4) Set kubeconfig and verify cluster access
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
kubectl --context oci-primary get nodes

# 5) Deploy Kubernetes resources (tunnel must be running)
export K8S_API="https://127.0.0.1:16443"
terraform apply -var-file="$TFVARS" -target=module.pre-install -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale.module.crds -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale -var="kubernetes_api_host=$K8S_API" -auto-approve

# 6) DNS-01 webhook (REQUIRED when HTTP-01 is likely blocked)
# This enables certificate issuance when public_lb_cidrs blocks HTTP-01 validation and/or for DR standby pre-issuance.
terraform apply -var-file="$TFVARS" -target=module.cert-manager-oci-webhook -var="kubernetes_api_host=$K8S_API" -auto-approve

# 7) Global DNS (primary only, optional; requires ingress LB IP)
terraform apply -var-file="$TFVARS" -target=module.global-dns -var="kubernetes_api_host=$K8S_API" -auto-approve
```

Primary global DNS settings (tfvars):

- `manage_global_dns = true`
- `create_global_dns_zone = true` (or `false` + `global_dns_zone_id` if the zone already exists)
- Recommended: set `secondary_remote_state_config` so primary can read `secondary_ingest_lb_ip` and include the secondary answer at apply time.
Certificate issuance requirements:

- When `public_lb_cidrs` restricts load balancer access (recommended for security), Let's Encrypt HTTP-01 validation is blocked.
- You must apply `module.cert-manager-oci-webhook` to enable DNS-01 validation.
- Set `cert_dns01_provider = "oci"` and `cert_dns01_webhook_enabled = true` in your tfvars (recommended: keep `cert_dns01_webhook_mode = "auto"`).
- Without the webhook, the `logscale-dr.oci-dr.humio.net` certificate will remain in `READY: False` state.
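The rule for when the webhook is needed can be expressed as a small predicate. This is a sketch of the documented conditions (restricted `public_lb_cidrs`, or standby pre-issuance), not an actual script in this repo:

```shell
#!/usr/bin/env bash
# Sketch: is the DNS-01 webhook required?
#   $1 = "true" if public_lb_cidrs restricts LB access (blocking HTTP-01)
#   $2 = dr role: "active" or "standby"

dns01_webhook_required() {
  if [ "$1" = "true" ] || [ "$2" = "standby" ]; then
    echo "yes"
  else
    echo "no"
  fi
}
```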
## Standby Deployment (workspace: secondary, dr="standby")
```
# Initialize (one-time per workspace)
terraform workspace new secondary || terraform workspace select secondary
terraform init -backend-config=backend-configs/secondary-oci.tfbackend
export TFVARS="secondary-us-chicago-1.tfvars"

# =============================================================================
# PHASE 1: Infrastructure (no tunnel required)
# These modules create OCI resources only - no Kubernetes API access needed
# =============================================================================

# 1) Infra foundation
terraform apply -var-file="$TFVARS" -target=module.core -target=module.logscale-storage -auto-approve

# 2) OKE cluster + Bastion service
terraform apply -var-file="$TFVARS" -target=module.bastion -target=module.oke -auto-approve

# =============================================================================
# PHASE 2: Kubernetes resources (tunnel required)
# These modules require Kubernetes API access via bastion tunnel
# =============================================================================

# 3) Start bastion tunnel (in a separate terminal)
LOCAL_PORT=16444 ./scripts/setup-bastion-tunnel.sh --workspace secondary kubectl

# 4) Set kubeconfig and verify cluster access
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
kubectl --context oci-secondary get nodes

# 5) Deploy Kubernetes resources (tunnel must be running)
export K8S_API="https://127.0.0.1:16444"
terraform apply -var-file="$TFVARS" -target=module.pre-install -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale.module.crds -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale -var="kubernetes_api_host=$K8S_API" -auto-approve

# 6) DNS-01 webhook (REQUIRED when HTTP-01 is likely blocked)
# This enables certificate issuance when public_lb_cidrs blocks HTTP-01 validation and/or for DR standby pre-issuance.
# IMPORTANT: This MUST be deployed BEFORE module.dr-failover-function (see step 7)
terraform apply -var-file="$TFVARS" -target=module.cert-manager-oci-webhook -var="kubernetes_api_host=$K8S_API" -auto-approve

# 7) DR failover function (standby only)
# IMPORTANT: Requires TLS certificate from DNS-01 webhook (step 6) - do not deploy before the webhook
terraform apply -var-file="$TFVARS" -target=module.dr-failover-function -var="kubernetes_api_host=$K8S_API" -auto-approve
```

Standby settings (tfvars):

- `manage_global_dns = false` (important: avoid two states managing global DNS)
- `cert_dns01_webhook_enabled = true` (required for certificate issuance; recommended: `cert_dns01_webhook_mode = "auto"`)
- `primary_remote_state_config` must be set so standby can read:
  - the primary encryption key output (for the standby secret)
  - primary bucket details (for `S3_RECOVER_FROM_*`)
  - primary steering policy IDs (so the function can update DNS)
## Standby Readiness Checklist (Before Any DR Event)

Standby requires "everything ready except LogScale pods".

| Check | Command | Expected |
|---|---|---|
| Humio operator scaled to 0 | `kubectl --context oci-secondary -n logging get deploy humio-operator` | replicas: 0 |
| Kafka pods running | `kubectl --context oci-secondary -n logging get pods \| grep -E 'kafka\|strimzi'` | All pods Running |
| Ingress has external IP | `kubectl --context oci-secondary -n logging-ingress get svc` | EXTERNAL-IP assigned |
| TLS ready for global FQDN | `kubectl --context oci-secondary -n logging get secret logscale-dr.oci-dr.humio.net` | Secret exists |
| HumioCluster has S3_RECOVER_FROM_* | `kubectl --context oci-secondary -n logging get humiocluster -o yaml \| grep S3_RECOVER` | Env vars present |
| DR function exists | `oci fn function list --application-id <app-id> --profile <profile>` | Function listed |
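The checklist can be driven by a tiny runner that turns each command's exit status into a PASS/FAIL line. The `run_check` helper is illustrative; the commands passed to it would be the ones from the table above:

```shell
#!/usr/bin/env bash
# Minimal check runner: prints PASS/FAIL based on a command's exit status.

run_check() {
  local desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

# Examples (tunnel liveness; extend with the kubectl/oci checks from the table):
run_check "primary tunnel listening"   lsof -i :16443
run_check "secondary tunnel listening" lsof -i :16444
```

Any `FAIL:` line means the standby is not ready for a DR event and should be investigated before proceeding.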