Cluster Access

This section explains how to reach both private OKE clusters safely and manage kubeconfig contexts.

After deploying module.bastion and module.oke (Phase 1), both OKE clusters are private and require SSH tunnels via OCI Bastion Service for access.

Create and use a local kubeconfig-dr.yaml (gitignored) to manage oci-primary / oci-secondary contexts.

Prerequisites

  • OCI CLI installed and configured with a profile matching config_file_profile (default: DEFAULT)

  • SSH keys configured (~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub)

  • Phase 1 infrastructure deployed (module.bastion + module.oke)
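The prerequisites above can be checked with a small preflight script before starting. This is an illustrative sketch; the `require_cmd` / `require_file` helper names are hypothetical, not part of this repo:

```shell
# Illustrative preflight check for the prerequisites above.
# require_cmd / require_file are hypothetical helpers, not repo scripts.
require_cmd()  { command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "MISSING command: $1"; }
require_file() { [ -f "$1" ] && echo "ok: $1" || echo "MISSING file: $1"; }

require_cmd oci
require_cmd kubectl
require_file "$HOME/.ssh/id_ed25519"
require_file "$HOME/.ssh/id_ed25519.pub"
```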

Kubernetes API Access Modes

OKE clusters can be configured with two different access modes for the Kubernetes API. Choose based on your security requirements:

| Access Mode | provision_bastion | endpoint_public_access | Use Case |
|---|---|---|---|
| Bastion Tunnel | true | false | Production clusters with strict network isolation |
| Public Endpoint | false | true | Development/testing or when direct access is acceptable |

Access Mode Comparison

| Feature | Bastion Tunnel | Public Endpoint |
|---|---|---|
| Network exposure | Private only (VCN) | Public internet (IP-restricted) |
| kubernetes_api_host | Required (tunnel URL) | Auto-detected from kubeconfig |
| SSH tunnel required | Yes | No |
| Terraform commands | Need -var="kubernetes_api_host=..." | No extra variables needed |
| Security | Higher (no public exposure) | Medium (IP allowlist via control_plane_allowed_cidrs) |

Important

The kubernetes_api_host variable should only be set when using bastion tunnel mode (provision_bastion=true).

When using public endpoint mode (endpoint_public_access=true), do not set this variable; the Kubernetes and Helm providers will automatically discover the cluster's public endpoint from the OCI-generated kubeconfig.
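The fallback behavior described above corresponds to a provider-wiring pattern roughly like the following. This is a sketch only; the repo's actual providers.tf, the kubeconfig path, and the empty-string sentinel are assumptions:

```terraform
# Sketch of the provider-selection pattern described above (illustrative;
# the repo's real provider wiring may differ).
# - bastion mode: kubernetes_api_host is set and overrides the server in the
#   OCI-generated kubeconfig (the tunnel terminates at 127.0.0.1).
# - public endpoint mode: kubernetes_api_host is empty, so the kubeconfig's
#   own server entry is used unchanged.
provider "kubernetes" {
  config_path = "${path.module}/generated/kubeconfig" # assumed location
  host        = var.kubernetes_api_host != "" ? var.kubernetes_api_host : null
  insecure    = var.kubernetes_api_host != "" # cluster cert won't match 127.0.0.1
}
```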

Configuration: Bastion Tunnel Mode (Production)
```terraform
# tfvars for bastion tunnel access
provision_bastion      = true
endpoint_public_access = false
kubernetes_api_host    = "https://127.0.0.1:16443" # Tunnel port

# Required: CIDRs allowed to connect to the bastion
bastion_client_allow_list = [
  "203.176.185.0/24", # Your office IP range
  "147.161.213.0/24", # VPN range
]
```

Terraform usage with bastion:

```shell
# 1. Start the bastion tunnel (separate terminal)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# 2. Run terraform with kubernetes_api_host
terraform apply -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"
```

Configuration: Public Endpoint Mode (Development)
```terraform
# tfvars for public endpoint access
provision_bastion      = false
endpoint_public_access = true

# kubernetes_api_host is NOT needed - auto-detected from kubeconfig

# Required: CIDRs allowed to access the K8s API (port 6443)
control_plane_allowed_cidrs = [
  "203.176.185.0/24", # Your office IP range
  "147.161.213.0/24", # VPN range
]
```

Terraform usage with public endpoint:

```shell
# No tunnel needed - direct access
terraform apply -var-file=single-us-chicago-1.tfvars
```

Note

When endpoint_public_access=true and provision_bastion=false, the Kubernetes and Helm providers automatically discover the cluster's public endpoint from the OCI kubeconfig. You do not need to set kubernetes_api_host.

Establishing SSH Tunnels

Both OKE clusters are deployed with private API endpoints, meaning the Kubernetes API server is not directly accessible from the internet.

To interact with the clusters via kubectl or Terraform, you must establish an SSH tunnel through the OCI Bastion Service.

The bastion tunnel creates a secure, encrypted connection from your local computer to the OKE cluster's private API endpoint:

  1. Local machine → (SSH over port 22) → OCI Bastion Service → (private network) → OKE API Server

  2. The tunnel forwards a local port (e.g., 16443) to the cluster's internal API endpoint (port 6443)

  3. kubectl and Terraform then connect to https://127.0.0.1:<local-port> which is transparently forwarded to the private cluster
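Under the hood, the tunnel amounts to an SSH local port forward of roughly this shape. The sketch below is illustrative; `build_forward_spec` is a hypothetical helper, and the real session OCID and bastion hostname come from the script and the OCI Bastion session:

```shell
# Illustrative only: assemble the -L forwarding spec the SSH tunnel uses.
# build_forward_spec is a hypothetical helper, not part of the repo.
build_forward_spec() {
  local local_port=$1 api_private_ip=$2
  # forward local_port on 127.0.0.1 to the private API endpoint's port 6443
  echo "${local_port}:${api_private_ip}:6443"
}

# Roughly what the tunnel boils down to (session OCID and bastion host elided):
# ssh -N -L "$(build_forward_spec 16443 <api-private-ip>)" <session-ocid>@<bastion-host>
```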

Run these commands in separate terminals to establish tunnels to both clusters:

```shell
# Terminal 1: Primary cluster tunnel (port 16443)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# Terminal 2: Secondary cluster tunnel (port 16444)
LOCAL_PORT=16444 ./scripts/setup-bastion-tunnel.sh --workspace secondary kubectl
```

Using kubectl

Set the kubeconfig (required for all kubectl commands):

```shell
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
```

Access clusters:

```shell
# Access PRIMARY cluster (tunnel on port 16443)
kubectl --context oci-primary get pods -n logging

# Access SECONDARY cluster (tunnel on port 16444)
kubectl --context oci-secondary get pods -n logging

# Switch the default context
kubectl config use-context oci-primary
kubectl get pods -n logging # Uses primary
kubectl config use-context oci-secondary
kubectl get pods -n logging # Uses secondary
```

Verify tunnels are running:

```shell
lsof -i :16443 # Primary tunnel
lsof -i :16444 # Secondary tunnel
```

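If lsof is unavailable, a bash-only probe of the local ports works as well. This is a sketch (`port_open` is a hypothetical helper, and `/dev/tcp` is a bash feature, not POSIX sh):

```shell
# Bash-only check that something is listening on a local tunnel port.
# port_open is a hypothetical helper; /dev/tcp requires bash.
port_open() { (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; }

for port in 16443 16444; do
  if port_open "$port"; then echo "tunnel on $port: up"; else echo "tunnel on $port: down"; fi
done
```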
Kubeconfig Management
Context Reference
| Context | Cluster | Tunnel Port | Default Namespace |
|---|---|---|---|
| oci-primary | dr-primary | 16443 | default |
| oci-secondary | dr-secondary | 16444 | default |

Note: The kubeconfig uses the OCI CLI exec credential plugin to generate and refresh tokens automatically.

Creating/Updating kubeconfig-dr.yaml

If kubeconfig-dr.yaml doesn't exist or needs to be regenerated (note: it is intentionally gitignored):

```shell
# 1. Get cluster IDs from Terraform
terraform workspace select primary
PRIMARY_CLUSTER_ID=$(terraform output -raw cluster_id)
terraform workspace select secondary
SECONDARY_CLUSTER_ID=$(terraform output -raw cluster_id)

# 2. Create the kubeconfig-dr.yaml file
OCI_CLI_PROFILE="${OCI_CLI_PROFILE:-DEFAULT}" # Must match your OCI CLI profile / config_file_profile
cat > kubeconfig-dr.yaml << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:16443
  name: oci-primary
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:16444
  name: oci-secondary
contexts:
- context:
    cluster: oci-primary
    user: oci-primary-user
  name: oci-primary
- context:
    cluster: oci-secondary
    user: oci-secondary-user
  name: oci-secondary
current-context: oci-primary
users:
- name: oci-primary-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ${PRIMARY_CLUSTER_ID}
      - --region
      - us-chicago-1
      - --profile
      - ${OCI_CLI_PROFILE}
      - --auth
      - api_key
      env:
      - name: SUPPRESS_LABEL_WARNING
        value: "True"
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: oci-secondary-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ${SECONDARY_CLUSTER_ID}
      - --region
      - us-chicago-1
      - --profile
      - ${OCI_CLI_PROFILE}
      - --auth
      - api_key
      env:
      - name: SUPPRESS_LABEL_WARNING
        value: "True"
      interactiveMode: IfAvailable
      provideClusterInfo: false
EOF
```
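After generating the file, a quick sanity check confirms that both contexts made it in before pointing KUBECONFIG at it. This is a sketch; `check_contexts` is a hypothetical helper, not a repo script:

```shell
# Verify a kubeconfig file contains the expected context names.
# check_contexts is a hypothetical helper, not part of the repo.
check_contexts() {
  local file=$1; shift
  local ctx
  for ctx in "$@"; do
    if grep -q "name: ${ctx}\$" "$file"; then
      echo "context ${ctx}: present"
    else
      echo "context ${ctx}: MISSING"
    fi
  done
}

# Run after generating the file:
# check_contexts kubeconfig-dr.yaml oci-primary oci-secondary
```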
Port Allocation Convention

| Purpose | Primary Port | Secondary Port |
|---|---|---|
| DR Simulation / kubeconfig-dr.yaml (default) | 16443 | 16444 |
| Verification/Debug | 26443 | 26444 |
| Alternative | 36443 | 36444 |
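When scripting tunnels, the convention above can be captured in a small helper. This is a sketch; `tunnel_port` is a hypothetical function reflecting the table, not part of the repo:

```shell
# Map (purpose, workspace) to the conventional local tunnel port.
# tunnel_port is a hypothetical helper mirroring the table above.
tunnel_port() {
  local purpose=$1 workspace=$2 base
  case "$purpose" in
    default)      base=16443 ;; # DR simulation / kubeconfig-dr.yaml
    verification) base=26443 ;;
    alternative)  base=36443 ;;
    *) echo "unknown purpose: $purpose" >&2; return 1 ;;
  esac
  case "$workspace" in
    primary)   echo "$base" ;;
    secondary) echo $((base + 1)) ;;
    *) echo "unknown workspace: $workspace" >&2; return 1 ;;
  esac
}

# Example:
# LOCAL_PORT=$(tunnel_port default secondary) ./scripts/setup-bastion-tunnel.sh --workspace secondary kubectl
```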
Terraform with Non-Default Tunnel Ports

When running SSH tunnels on non-default local ports, set kubernetes_api_host to match the port your tunnel is actually forwarding:

```shell
# Primary cluster tunnel on port 16443
terraform workspace select primary
terraform plan  -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"
terraform apply -var-file=primary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16443"

# Secondary cluster tunnel on port 16444
terraform workspace select secondary
terraform plan  -var-file=secondary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16444"
terraform apply -var-file=secondary-us-chicago-1.tfvars -var="kubernetes_api_host=https://127.0.0.1:16444"
```

Primary Deployment (workspace: primary, dr="active")
```shell
# Initialize (one-time per workspace)
terraform workspace new primary || terraform workspace select primary
terraform init -backend-config=backend-configs/primary-oci.tfbackend
export TFVARS="primary-us-chicago-1.tfvars"

# =============================================================================
# PHASE 1: Infrastructure (no tunnel required)
# These modules create OCI resources only - no Kubernetes API access needed
# =============================================================================

# 1) Infra foundation
terraform apply -var-file="$TFVARS" -target=module.core -target=module.logscale-storage -auto-approve

# 2) OKE cluster + Bastion service
terraform apply -var-file="$TFVARS" -target=module.bastion -target=module.oke -auto-approve

# =============================================================================
# PHASE 2: Kubernetes resources (tunnel required)
# These modules require Kubernetes API access via bastion tunnel
# =============================================================================

# 3) Start bastion tunnel (in a separate terminal)
LOCAL_PORT=16443 ./scripts/setup-bastion-tunnel.sh --workspace primary kubectl

# 4) Set kubeconfig and verify cluster access
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
kubectl --context oci-primary get nodes

# 5) Deploy Kubernetes resources (tunnel must be running)
export K8S_API="https://127.0.0.1:16443"
terraform apply -var-file="$TFVARS" -target=module.pre-install -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale.module.crds -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale -var="kubernetes_api_host=$K8S_API" -auto-approve

# 6) DNS-01 webhook (REQUIRED when HTTP-01 is blocked)
# Enables certificate issuance when public_lb_cidrs blocks HTTP-01 validation and/or for DR standby pre-issuance.
terraform apply -var-file="$TFVARS" -target=module.cert-manager-oci-webhook -var="kubernetes_api_host=$K8S_API" -auto-approve

# 7) Global DNS (primary only, optional; requires ingress LB IP)
terraform apply -var-file="$TFVARS" -target=module.global-dns -var="kubernetes_api_host=$K8S_API" -auto-approve
```

Primary global DNS settings (tfvars):

  • manage_global_dns = true

  • create_global_dns_zone = true (or false + global_dns_zone_id if the zone already exists)

  • Recommended: set secondary_remote_state_config so primary can read secondary_ingest_lb_ip and include the secondary answer at apply time.
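In tfvars form, the primary-side global DNS settings above look roughly like this (the shape of secondary_remote_state_config is repo-specific and elided):

```terraform
# Primary workspace tfvars: sketch of the global DNS settings listed above
manage_global_dns      = true
create_global_dns_zone = true
# Or, if the zone already exists:
# create_global_dns_zone = false
# global_dns_zone_id     = "<existing-zone-ocid>"

# Recommended so primary can read secondary_ingest_lb_ip at apply time;
# the exact attribute shape is defined by the repo and elided here.
# secondary_remote_state_config = { ... }
```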

Certificate issuance requirements:

  • When public_lb_cidrs restricts load balancer access (recommended for security), Let's Encrypt HTTP-01 validation is blocked

  • You must apply module.cert-manager-oci-webhook to enable DNS-01 validation

  • Set cert_dns01_provider = "oci" and cert_dns01_webhook_enabled = true in your tfvars (recommended: keep cert_dns01_webhook_mode = "auto")

  • Without the webhook, the logscale-dr.oci-dr.humio.net certificate will remain in READY: False state
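As tfvars, the certificate settings listed above are:

```terraform
# Certificate issuance settings from the list above
cert_dns01_provider        = "oci"
cert_dns01_webhook_enabled = true
cert_dns01_webhook_mode    = "auto" # recommended
```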

Standby Deployment (workspace: secondary, dr="standby")
```shell
# Initialize (one-time per workspace)
terraform workspace new secondary || terraform workspace select secondary
terraform init -backend-config=backend-configs/secondary-oci.tfbackend
export TFVARS="secondary-us-chicago-1.tfvars"

# =============================================================================
# PHASE 1: Infrastructure (no tunnel required)
# These modules create OCI resources only - no Kubernetes API access needed
# =============================================================================

# 1) Infra foundation
terraform apply -var-file="$TFVARS" -target=module.core -target=module.logscale-storage -auto-approve

# 2) OKE cluster + Bastion service
terraform apply -var-file="$TFVARS" -target=module.bastion -target=module.oke -auto-approve

# =============================================================================
# PHASE 2: Kubernetes resources (tunnel required)
# These modules require Kubernetes API access via bastion tunnel
# =============================================================================

# 3) Start bastion tunnel (in a separate terminal)
LOCAL_PORT=16444 ./scripts/setup-bastion-tunnel.sh --workspace secondary kubectl

# 4) Set kubeconfig and verify cluster access
export KUBECONFIG=$(pwd)/kubeconfig-dr.yaml # local file (gitignored)
kubectl --context oci-secondary get nodes

# 5) Deploy Kubernetes resources (tunnel must be running)
export K8S_API="https://127.0.0.1:16444"
terraform apply -var-file="$TFVARS" -target=module.pre-install -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale.module.crds -var="kubernetes_api_host=$K8S_API" -auto-approve
terraform apply -var-file="$TFVARS" -target=module.logscale -var="kubernetes_api_host=$K8S_API" -auto-approve

# 6) DNS-01 webhook (REQUIRED when HTTP-01 is blocked)
# Enables certificate issuance when public_lb_cidrs blocks HTTP-01 validation and/or for DR standby pre-issuance.
# IMPORTANT: This MUST be deployed BEFORE module.dr-failover-function (see step 7)
terraform apply -var-file="$TFVARS" -target=module.cert-manager-oci-webhook -var="kubernetes_api_host=$K8S_API" -auto-approve

# 7) DR failover function (standby only)
# IMPORTANT: Requires the TLS certificate from the DNS-01 webhook (step 6) - do not deploy before the webhook
terraform apply -var-file="$TFVARS" -target=module.dr-failover-function -var="kubernetes_api_host=$K8S_API" -auto-approve
```

Standby settings (tfvars):

  • manage_global_dns = false (important: avoid two states managing global DNS)

  • cert_dns01_webhook_enabled = true (required for certificate issuance; recommended: cert_dns01_webhook_mode = "auto")

  • primary_remote_state_config must be set so standby can read:

    • the primary encryption key output (for the standby secret)

    • primary bucket details (for S3_RECOVER_FROM_*)

    • primary steering policy IDs (so the function can update DNS)
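As tfvars, the standby-side settings above look roughly like this (the shape of primary_remote_state_config is repo-specific and elided):

```terraform
# Standby workspace tfvars: sketch of the settings listed above
manage_global_dns          = false # primary owns global DNS; avoid two states managing it
cert_dns01_webhook_enabled = true
cert_dns01_webhook_mode    = "auto" # recommended

# Must be set so standby can read primary outputs (encryption key,
# bucket details, steering policy IDs); exact shape is repo-specific.
# primary_remote_state_config = { ... }
```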

Standby Readiness Checklist (Before Any DR Event)

Standby requires "everything ready except LogScale pods".

| Check | Command | Expected |
|---|---|---|
| Humio operator scaled to 0 | `kubectl --context oci-secondary -n logging get deploy humio-operator` | replicas: 0 |
| Kafka pods running | `kubectl --context oci-secondary -n logging get pods \| grep -E 'kafka\|strimzi'` | All pods Running |
| Ingress has external IP | `kubectl --context oci-secondary -n logging-ingress get svc` | EXTERNAL-IP assigned |
| TLS ready for global FQDN | `kubectl --context oci-secondary -n logging get secret logscale-dr.oci-dr.humio.net` | Secret exists |
| HumioCluster has S3_RECOVER_FROM_* | `kubectl --context oci-secondary -n logging get humiocluster -o yaml \| grep S3_RECOVER` | Env vars present |
| DR function exists | `oci fn function list --application-id <app-id> --profile <profile>` | Function listed |