# Requirements
The requirements are described in the following sections.
## Software Requirements
The software requirements are shown in the following table:
| Software | Minimum Version | Purpose |
|---|---|---|
| Terraform | >= 1.5.7 | Infrastructure provisioning |
| kubectl | >= 1.27 | Kubernetes cluster management |
| Azure CLI | Latest | Azure authentication and management |
| jq | Latest | JSON processing for verification scripts |
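The tool checks in the table above can be scripted. A minimal sketch (it only checks that each tool is on `PATH`; version checks are left to the respective `--version`/`version` commands):

```shell
# Report which of the required tools are installed.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "MISSING: $tool"
    fi
  done
}

check_tools terraform kubectl az jq
```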
## Azure Account Requirements

- Azure subscription with billing enabled
- Access to two Azure regions (for example, `eastus` and `westus`)
- Azure Functions enabled
- Azure Monitor enabled
- Azure DNS enabled (if managing DNS)
- Azure Traffic Manager enabled
## RBAC Permissions
The following Azure RBAC roles are required:
| Role | Purpose |
|---|---|
| Contributor | Resource creation and management |
| Storage Blob Data Reader | Cross-region storage access for DR |
| Azure Kubernetes Service Cluster Admin Role | AKS cluster management |
| DNS Zone Contributor | DNS record management |
| Monitoring Contributor | Alert policies and action groups |
**Automatic RBAC for DR:** When deploying a standby cluster (`dr="standby"`), Terraform automatically creates a "Storage Blob Data Reader" role assignment on the primary storage account for the secondary cluster's AKS managed identity (`azurerm_role_assignment.dr_read_primary_storage`). LogScale still uses shared keys for storage access today, so this role assignment is not required for LogScale authentication, but it is created for parity, visibility, and future-proofing.
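As an illustrative sketch only (not the repo's exact code; the input variable names here are assumptions), the conditional role assignment looks roughly like:

```hcl
# Sketch: grants the standby cluster's AKS identity read access to the
# primary storage account. Variable names are illustrative, not the repo's.
resource "azurerm_role_assignment" "dr_read_primary_storage" {
  count                = var.dr == "standby" ? 1 : 0
  scope                = var.primary_storage_account_id       # assumed input
  role_definition_name = "Storage Blob Data Reader"
  principal_id         = var.aks_kubelet_identity_object_id   # assumed input
}
```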
## Infrastructure Prerequisites

The main infrastructure requirements are:

- Virtual Network configured in each region
- Network Security Groups allowing internal cluster communication
- NAT Gateway configured for outbound internet access from private nodes
- SSL/TLS certificates available, or cert-manager configured
## Kubernetes Access

The main Kubernetes access requirements are:

- kubeconfig configured for both AKS clusters
- Contexts named consistently (for example, `aks-<prefix>`)
- RBAC permissions to manage deployments in the LogScale namespace
- `kubeconfig_path` can be set in tfvars to specify a custom kubeconfig file instead of relying on the default `~/.kube/config`. This enables working with multiple clusters without modifying the default kubeconfig. If not set in tfvars, it defaults to `~/.kube/config`. The value must be an absolute path (tilde `~` expansion does not work in Terraform local-exec provisioners).
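Because tilde expansion does not work in local-exec provisioners, it can help to sanity-check the path before writing it into tfvars. A minimal sketch (the example path is hypothetical):

```shell
# Fail fast if a kubeconfig path is not absolute (e.g. starts with ~).
check_kubeconfig_path() {
  case "$1" in
    /*) echo "OK: $1" ;;
    *)  echo "ERROR: kubeconfig_path must be absolute, got: $1" ;;
  esac
}

check_kubeconfig_path "/home/user/.kube/config-logscale"   # hypothetical path
check_kubeconfig_path "~/.kube/config"                     # rejected: ~ is not expanded
```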
## Storage Container Naming Convention

Storage container names are state-specific (they include a random prefix stored in state). Do not hardcode or guess these names; always use Terraform outputs.

| Cluster | Command to Retrieve the Container Name |
|---|---|
| Primary | `terraform output -raw storage_container_name` (with the primary backend initialized) |
| Secondary | `terraform output -raw storage_container_name` (with the secondary backend initialized) |
## Encryption Key Requirements

The main encryption key requirements are:

- The primary cluster generates the encryption key via the pre-install module on first deploy.
- The secondary cluster receives the key via the `existing_storage_encryption_key` variable (from the primary's `terraform output storage_encryption_key`).
- Keys are stored as Kubernetes secrets (`logscale-storage-encryption-key`) and never committed to version control.
- The same key must be used across both clusters for DR recovery.
- The encryption key is passed to LogScale via the `AZURE_STORAGE_ENCRYPTION_KEY` environment variable (`secretKeyRef`).
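Under the assumption that both backend configs already exist, the key handoff from primary to secondary can be sketched as two steps, shown here as shell functions so the backend re-init boundary is explicit (`storage_encryption_key` and `existing_storage_encryption_key` are the output and variable named above):

```shell
# Step 1: read the encryption key from the primary's state.
fetch_primary_key() {
  terraform init -backend-config=backend-configs/production-primary.hcl -reconfigure >/dev/null
  terraform output -raw storage_encryption_key
}

# Step 2: apply the secondary cluster with that key.
apply_secondary_with_key() {
  terraform init -backend-config=backend-configs/production-secondary.hcl -reconfigure >/dev/null
  terraform apply -var-file=secondary-eastus2.tfvars \
    -var "existing_storage_encryption_key=$1"
}

# Usage (run from the repo root):
# apply_secondary_with_key "$(fetch_primary_key)"
```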
## Pre-Deployment Checklist

The following is your pre-deployment checklist:

- Terraform backend (Azure Storage state container) accessible from both regions. If not provisioned, follow Terraform Backend and Workspace Setup in the next section to create the backend resources.
- `terraform workspace list` shows the primary and secondary workspaces.
- Azure identity configured and authenticated:

  ```shell
  az account show
  terraform version
  ```

- Primary tfvars sets `dr_cross_region_storage_access = true`.
- For standby deployment: the primary encryption key is available via remote state or `existing_storage_encryption_key`.
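The checklist above can also be wrapped in a small pre-flight function. A sketch that only checks the items the checklist names:

```shell
# Pre-flight checks mirroring the checklist above.
preflight() {
  az account show >/dev/null 2>&1      || { echo "FAIL: not logged in to Azure"; return 1; }
  terraform version >/dev/null 2>&1    || { echo "FAIL: terraform not available"; return 1; }
  terraform workspace list 2>/dev/null | grep -q primary \
                                       || { echo "FAIL: no primary workspace"; return 1; }
  terraform workspace list 2>/dev/null | grep -q secondary \
                                       || { echo "FAIL: no secondary workspace"; return 1; }
  echo "preflight OK"
}
```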
# Terraform Backend and Workspace Setup
The DR deployment uses separate Terraform state files for primary and secondary clusters. Both state files are stored in the same Azure Blob Storage backend but use different keys for complete isolation.
## Backend Prerequisites
Create the Azure Storage resources for Terraform state if they do not already exist:
```shell
# 1. Create Resource Group for Terraform state
az group create --name terraform-state-rg --location centralus

# 2. Create Storage Account (name must be globally unique, 3-24 chars, lowercase alphanumeric)
az storage account create \
  --name <unique_storage_account_name> \
  --resource-group terraform-state-rg \
  --location centralus \
  --sku Standard_LRS \
  --encryption-services blob \
  --allow-blob-public-access false \
  --min-tls-version TLS1_2

# 3. Create Blob Container for state files
az storage container create \
  --name tfstate \
  --account-name <storage_account_name>

# 4. (Optional) Enable versioning for state recovery
az storage account blob-service-properties update \
  --account-name <storage_account_name> \
  --resource-group terraform-state-rg \
  --enable-versioning true
```

## Backend Configuration
This repo uses partial backend configuration. Start with the example templates and copy them to your environment-specific backend configs:

```shell
cp backend-configs/example-primary.hcl backend-configs/production-primary.hcl
cp backend-configs/example-secondary.hcl backend-configs/production-secondary.hcl
```

Then update `backend-configs/production-primary.hcl` and `backend-configs/production-secondary.hcl` with your Azure Resource Group and Storage Account values.
Example `backend-configs/production-primary.hcl`:

```hcl
resource_group_name  = "terraform-state-rg"
storage_account_name = "<your_storage_account_name>"
container_name       = "tfstate"
key                  = "logscale-azure-aks-primary.tfstate"
```

Example `backend-configs/production-secondary.hcl`:

```hcl
resource_group_name  = "terraform-state-rg"
storage_account_name = "<your_storage_account_name>"
container_name       = "tfstate"
key                  = "logscale-azure-aks-secondary.tfstate"
```

**State File Layout:**
Each cluster has its own state file:

| Cluster | Backend Config | State File Key |
|---|---|---|
| Primary | `production-primary.hcl` | `logscale-azure-aks-primary.tfstate` |
| Secondary | `production-secondary.hcl` | `logscale-azure-aks-secondary.tfstate` |
## Workspace Creation
Each cluster requires initialization with its respective backend config.
**Note:** You must reinitialize when switching between clusters.
**Deploy Primary Cluster:**

```shell
# Initialize with primary backend config
terraform init -backend-config=backend-configs/production-primary.hcl

# Plan and apply primary cluster
terraform plan -var-file=primary-centralus.tfvars
terraform apply -var-file=primary-centralus.tfvars
```

**Deploy Secondary Cluster:**

```shell
# Reinitialize with secondary backend config
terraform init -backend-config=backend-configs/production-secondary.hcl -reconfigure

# Plan and apply secondary cluster
terraform plan -var-file=secondary-eastus2.tfvars
terraform apply -var-file=secondary-eastus2.tfvars
```

**Switching Between Clusters:**
```shell
# To switch from secondary back to primary:
terraform init -backend-config=backend-configs/production-primary.hcl -reconfigure

# To switch from primary to secondary:
terraform init -backend-config=backend-configs/production-secondary.hcl -reconfigure
```

**Re-initialization:** If you need to switch between backend configurations or reinitialize:

```shell
# Reconfigure backend (e.g., when changing storage accounts)
terraform init -backend-config=backend-configs/production-primary.hcl -reconfigure
```

## tfvars Safety Validation

A critical safety mechanism prevents applying the wrong tfvars file with the wrong backend. Each tfvars file includes a `workspace_name` variable that must match the expected cluster context.
`primary-centralus.tfvars`:

```hcl
workspace_name = "primary"
dr             = "active"
# ... other variables
```

`secondary-eastus2.tfvars`:

```hcl
workspace_name = "secondary"
dr             = "standby"
# ... other variables
```

If you attempt to apply the wrong tfvars file, Terraform fails with an error:

```
ERROR: WORKSPACE MISMATCH DETECTED!
============================================================================
Current Terraform workspace: 'default'
workspace_name in tfvars:    'secondary'

To fix this, either:
1. Initialize with the correct backend config:
   terraform init -backend-config=backend-configs/production-secondary.hcl -reconfigure
2. Or use the correct tfvars file for this backend:
   terraform plan -var-file=primary-<region>.tfvars
============================================================================
```

This validation prevents accidental destruction of resources by applying the primary configuration to the secondary cluster, or vice versa.
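One way such a guard can be implemented (an illustrative sketch, not necessarily this repo's exact mechanism) is a local that aborts the plan when the current workspace and the tfvars value disagree:

```hcl
variable "workspace_name" {
  type        = string
  description = "Expected cluster context; must match the backend/workspace in use"
}

locals {
  # tobool() on a non-boolean string fails the plan, surfacing the message below.
  assert_workspace = terraform.workspace == var.workspace_name ? true : tobool(
    "WORKSPACE MISMATCH: current workspace '${terraform.workspace}' != tfvars workspace_name '${var.workspace_name}'"
  )
}
```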
## Authentication Options

The Azure backend supports multiple authentication methods:

| Method | Use Case | Configuration |
|---|---|---|
| Azure CLI | Development | `az login` (no additional config) |
| Service Principal | Automation | Set `ARM_CLIENT_ID`, `ARM_CLIENT_SECRET`, `ARM_TENANT_ID`, `ARM_SUBSCRIPTION_ID` |
| Managed Identity | CI/CD | Set `ARM_USE_MSI=true`, `ARM_SUBSCRIPTION_ID` |
| Azure AD Auth | RBAC-based | Set `use_azuread_auth = true` in backend config |
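For example, service principal authentication for automation sets the four `ARM_*` environment variables before running Terraform (all values below are placeholders):

```shell
# Placeholders only; substitute your real service principal values.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"
export ARM_SUBSCRIPTION_ID="22222222-2222-2222-2222-222222222222"
```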
# Terraform Deployment Sequence
Follow this order to apply Terraform safely and avoid dependency issues. The Azure deployment differs from OCI in that it does not require a bastion tunnel for Kubernetes API access (AKS clusters are typically accessible directly with authorized IP ranges).
## Module Dependency Graph

![Module dependency graph]()
## Deployment Phases

### Phase 1: Infrastructure (No Kubernetes Required)
Deploy the core Azure infrastructure first. This does not require Kubernetes access.
```shell
# Initialize with primary backend config
terraform init -backend-config=backend-configs/production-primary.hcl

# Apply infrastructure modules only
terraform apply -var-file=primary-centralus.tfvars \
  -target="module.azure-core" \
  -target="module.azure-keyvault" \
  -target="module.azure-kubernetes" \
  -target="module.logscale-storage-account"
```

### Phase 2: Configure Kubernetes Access
After AKS is created, configure kubectl access:
```shell
# Get AKS credentials (adds context to ~/.kube/config)
az aks get-credentials \
  --resource-group <resource-group-name> \
  --name <aks-cluster-name> \
  --overwrite-existing

# Verify access
kubectl get nodes
```

### Phase 3: Pre-install and LogScale Application
Deploy the pre-install module (namespace + encryption key) and LogScale application stack:
```shell
# Apply pre-install and logscale modules
terraform apply -var-file=primary-centralus.tfvars \
  -target="module.pre-install" \
  -target="module.logscale"
```

### Phase 4: DR Modules (Conditional)
Deploy DR-specific modules based on cluster role.
For the Primary (active) cluster:

```shell
# Deploy global DNS and Traffic Manager (only if manage_global_dns=true)
terraform apply -var-file=primary-centralus.tfvars \
  -target="module.global-dns"
```

For the Secondary (standby) cluster:

```shell
# Deploy DR failover function (only if dr_failover_function_enabled=true)
terraform apply -var-file=secondary-eastus2.tfvars \
  -target="module.dr-failover-function"
```

### Phase 5: Full Apply (Final Verification)

After targeted applies succeed, run a full apply to ensure all resources are in sync:

```shell
terraform apply -var-file=primary-centralus.tfvars  # or secondary-eastus2.tfvars
```

## Complete Deployment Order Summary
| Phase | Primary Cluster | Secondary Cluster |
|---|---|---|
| 1 | `module.azure-core`, `module.azure-keyvault`, `module.azure-kubernetes`, `module.logscale-storage-account` | Same |
| 2 | `az aks get-credentials` | Same |
| 3 | `module.pre-install`, `module.logscale` | Same |
| 4 | `module.global-dns` (if `manage_global_dns=true`) | `module.dr-failover-function` (if `dr_failover_function_enabled=true`) |
| 5 | Full `terraform apply` | Full `terraform apply` |
Notes:

- **Backend/tfvars validation:** Each tfvars file includes `workspace_name`, which is validated against expected values; a mismatch triggers an error to prevent applying the wrong configuration.
- **Module dependencies:** `module.logscale` contains `kubernetes_manifest` resources that require the Kubernetes API to be reachable at plan time. Ensure `az aks get-credentials` has been run before planning this module.
- **DR module conditions:** `module.global-dns` only deploys when `dr="active"` and `manage_global_dns=true`. `module.dr-failover-function` only deploys when `dr="standby"` and `dr_failover_function_enabled=true`.
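The conditional wiring described in the notes above typically takes the form of a `count` guard on the module block. A sketch (the `source` paths are assumptions, not necessarily this repo's layout):

```hcl
# Sketch: modules gated on cluster role. Source paths are illustrative.
module "global-dns" {
  count  = var.dr == "active" && var.manage_global_dns ? 1 : 0
  source = "./modules/global-dns"
}

module "dr-failover-function" {
  count  = var.dr == "standby" && var.dr_failover_function_enabled ? 1 : 0
  source = "./modules/dr-failover-function"
}
```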
# Manual Secret Copy Checklist
Terraform does not replicate LogScale application/integration secrets between clusters. Before you rely on DR, ensure any secrets referenced by LogScale exist in both clusters.
Common examples (environment/integration specific):

- SSO (SAML/OIDC) client secrets and certificate material
- SMTP/Postmark credentials
- Ingest/API tokens (if stored as Kubernetes secrets)
- Any secrets referenced via `extra_user_logscale_envvars` (`secretKeyRef`)
- BYO ingress certificate secret (if `use_own_certificate_for_ingress=true`)
Copy pattern (only for application/integration secrets; do not copy cert-manager, webhook, or other cluster-internal secrets):

```shell
kubectl --context aks-primary -n logging get secret <name> -o yaml | \
  kubectl --context aks-secondary -n logging apply -f -
```
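To copy several secrets at once, the same pattern can be wrapped in a loop. A sketch (the secret names in the usage comment are hypothetical examples, not names from this deployment):

```shell
# Copy one named secret from the primary cluster to the secondary cluster.
copy_secret() {
  kubectl --context aks-primary -n logging get secret "$1" -o yaml | \
    kubectl --context aks-secondary -n logging apply -f -
}

# Usage with hypothetical names; replace with your own:
# for s in sso-client-secret smtp-credentials; do
#   copy_secret "$s"
# done
```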