Requirements

The requirements are described in the following sections.

Software Requirements

The software requirements are shown in the following table:

Software    Minimum Version   Purpose
Terraform   >= 1.5.7          Infrastructure provisioning
kubectl     >= 1.27           Kubernetes cluster management
Azure CLI   Latest            Azure authentication and management
jq          Latest            JSON processing for verification scripts

Note

versions.tf does not set required_version for Terraform, so the minimum version above is a recommended guideline, not an enforced constraint.
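
Since the constraint is not enforced in code, it is worth checking the installed version yourself. A minimal sketch using jq (already listed as a requirement above):

```shell
# Fail loudly if the installed Terraform is older than the recommended minimum
installed=$(terraform version -json | jq -r .terraform_version)
required="1.5.7"
# sort -V orders version strings; if $required sorts first (or ties), the install is new enough
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "Terraform $installed meets minimum $required"
else
  echo "Terraform $installed is below minimum $required" >&2
fi
```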

Terraform Providers (installed automatically by terraform init):

Provider     Version Constraint   Purpose
azurerm      ~>4.21.0             Azure Resource Manager resources
azapi        ~>2.8                Cross-region storage firewall reads (data.azapi_resource) and updates (azapi_update_resource)
kubernetes   >=2.38.0             Kubernetes resources (namespaces, secrets)
helm         >=2.17.0             Helm chart deployments (LogScale, operators)
random       >=3.6.1              Random name prefixes, encryption key generation
archive      >=2.4.0              DR failover function zip packaging

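
After terraform init succeeds, you can confirm the provider versions that were actually selected against the constraints above:

```shell
# Show the declared provider requirements for the configuration
terraform providers
# The exact versions init selected are pinned in the dependency lock file
grep -A1 'provider "' .terraform.lock.hcl
```
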
Azure Account Requirements
  • Azure subscription with billing enabled

  • Access to two Azure regions (for example, eastus and westus)

  • Azure Functions enabled

  • Azure Monitor enabled

  • Azure DNS enabled (if managing DNS)

  • Azure Traffic Manager enabled
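
The "enabled" items above map to Azure resource provider registrations on the subscription. A quick check, assuming the standard namespaces (Microsoft.Web for Functions, Microsoft.Insights for Monitor, Microsoft.Network for DNS and Traffic Manager):

```shell
# Each namespace should report "Registered"; if not, run: az provider register --namespace <ns>
for ns in Microsoft.Web Microsoft.Insights Microsoft.Network; do
  printf '%s: ' "$ns"
  az provider show --namespace "$ns" --query registrationState -o tsv
done
```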

RBAC Permissions

Required roles: Contributor, Storage Blob Data Reader, AKS Cluster Admin, DNS Zone Contributor, Monitoring Contributor. Terraform automatically creates cross-region RBAC assignments for DR.

For the full RBAC table and cross-resource-group access details, see RBAC Permissions.
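
To see which roles your signed-in identity currently holds before deploying, you can list assignments at the current subscription scope (a sketch):

```shell
# List distinct role names assigned to the signed-in user
az role assignment list \
  --assignee "$(az ad signed-in-user show --query id -o tsv)" \
  --query "[].roleDefinitionName" -o tsv | sort -u
```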

Infrastructure Prerequisites

The following infrastructure is created automatically by module.azure-core during deployment:

  • Virtual Network with dedicated subnets in each region

  • Network Security Groups allowing internal cluster communication

  • NAT Gateway configured for outbound internet access from private nodes

User action required before deployment:

  • SSL/TLS certificates available, or plan to use cert-manager (deployed automatically by module.logscale)

  • Sufficient Azure quota for VM cores, public IPs, and storage accounts in both regions. Verify quota before deploying:

    shell
    # Check vCPU quota for a specific VM family in a region
    az vm list-usage --location <region> -o table | grep -i "Standard LSv3"
    # Check overall vCPU limits
    az vm list-usage --location <region> -o table | grep -i "Total Regional vCPUs"
  • Availability zone support varies by region and VM SKU. For example, Standard_L8s_v3 may support zones [1, 2, 3] in one region but only [1, 3] in another. Verify zone availability for your chosen VM SKU in each region using az vm list-skus --location <region> --size <sku> --output table, and set azure_availability_zones accordingly in each tfvars file.
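
The zone check described in the last item can be scripted across both regions; the region and SKU values below are placeholders to replace with your own:

```shell
# Print the distinct zones reported for a SKU in each region
for region in eastus westus; do
  echo "== $region =="
  az vm list-skus --location "$region" --size Standard_L8s_v3 \
    --query "[].locationInfo[].zones[]" -o tsv | sort -u
done
```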

Storage Container Naming Convention

Container names include a random prefix and differ between clusters. Always use terraform output -raw storage_acct_container_name; never hardcode or guess names.

For naming details, see Storage Container Naming.
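
For example, capture the name in a variable instead of hardcoding it:

```shell
# Read the generated container name from the current workspace's outputs
CONTAINER_NAME=$(terraform output -raw storage_acct_container_name)
echo "Using container: $CONTAINER_NAME"
```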

Encryption Key Requirements

Primary generates the key; secondary receives it via remote state. Both clusters must use the same key for DR recovery.

For the full encryption key architecture, see Encryption Key Architecture.
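
A simple way to confirm both clusters agree is to compare the key across workspaces. The output name storage_encryption_key below is an assumption; substitute whatever terraform output lists in your configuration:

```shell
# Compare the encryption key across workspaces; the values must match for DR recovery
# NOTE: "storage_encryption_key" is a hypothetical output name - check `terraform output`
terraform workspace select primary
primary_key=$(terraform output -raw storage_encryption_key)
terraform workspace select secondary
secondary_key=$(terraform output -raw storage_encryption_key)
[ "$primary_key" = "$secondary_key" ] && echo "Keys match" || echo "Keys differ" >&2
```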

Pre-Deployment Checklist

The following is your pre-deployment checklist:

  • Terraform backend (Azure Storage state container) accessible from both regions. If it is not yet provisioned, follow Terraform Backend and State File Setup in the next section to create the backend resources.

  • terraform workspace list shows the primary and secondary workspaces

  • Azure identity configured and authenticated:

    shell
    az account show
    terraform version
  • Sufficient Azure quota for VM cores, public IPs, and storage accounts in both regions

  • Azure Functions and Azure Monitor enabled in the subscription (required for the DR failover function on the standby cluster)

  • For standby deployment: primary encryption key available via remote state or existing_storage_encryption_key

Kubernetes Access

Terraform does not require a kubeconfig file; the Kubernetes and Helm providers read AKS credentials directly from module.azure-kubernetes outputs. Cluster-specific kubeconfig files are auto-generated on terraform apply as kubeconfig-<aks-cluster-name>.yaml (git-ignored).

shell
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get nodes

Note

Namespace: This guide uses logging as the LogScale Kubernetes namespace. If your deployment uses a different namespace (configured via logscale_cluster_k8s_namespace_name in your tfvars), substitute it in every kubectl -n logging command. The ingress controller namespace follows the pattern <namespace>-ingress (e.g., logging-ingress). Check your tfvars or run kubectl get namespaces to confirm.
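
To confirm the namespaces in your deployment match what this guide assumes:

```shell
# Verify both the LogScale and ingress namespaces exist (assumes the default "logging")
NS=logging
kubectl get namespace "$NS" "${NS}-ingress"
```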