# Requirements

This section describes all the requirements that must be in place before deploying the DR infrastructure.
## Software Requirements
| Component | Minimum Version | Purpose |
|---|---|---|
| Terraform | >= 1.5 | Infrastructure provisioning |
| kubectl | >= 1.25 | Kubernetes cluster management |
| AWS CLI | >= 2.0 | AWS resource access and authentication |
| Helm | >= 3.10 | Kubernetes package management |
| jq | >= 1.6 | JSON processing for verification scripts |
## IAM Permissions

The deploying user or role requires permissions for EKS, S3, Route53, Lambda, IAM, SNS, CloudWatch, KMS, and SSM. For the full permissions list, see IAM Permissions Reference.
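As a rough shape of what that policy covers, the sketch below grants broad access to the services listed above. The wildcards are illustrative only; the scoped-down actions should come from the IAM Permissions Reference, not from this snippet.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DRDeploymentBroadSketch",
      "Effect": "Allow",
      "Action": [
        "eks:*", "s3:*", "route53:*", "lambda:*",
        "iam:*", "sns:*", "cloudwatch:*", "kms:*", "ssm:*"
      ],
      "Resource": "*"
    }
  ]
}
```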
## Deterministic S3 Bucket Naming

Both clusters need predictable S3 bucket names so IAM cross-region policies can be created before the peer cluster exists. For the naming convention, enforcement options, and remote state details, see Deterministic S3 Bucket Naming.
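The actual convention is defined in the Deterministic S3 Bucket Naming reference. Purely as an illustration of why determinism matters, a hypothetical scheme that derives the name only from values known before deployment (cluster name and region here) might look like:

```python
def bucket_name(cluster_name: str, region: str, suffix: str = "storage") -> str:
    """Hypothetical deterministic naming scheme: every input is known
    before either cluster exists, so each side can compute the peer's
    bucket name and pre-create cross-region IAM policies against it."""
    # S3 bucket names must be lowercase; no random suffix is involved.
    return f"{cluster_name}-{region}-{suffix}".lower()

# Both clusters can compute each other's bucket name up front.
print(bucket_name("logscale-primary", "us-west-2"))
# logscale-primary-us-west-2-storage
print(bucket_name("logscale-secondary", "us-east-1"))
# logscale-secondary-us-east-1-storage
```

Any scheme works as long as it is a pure function of pre-deployment inputs; the real convention and its enforcement options are documented in the reference linked above.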
## Encryption Key Requirements

The secondary cluster requires access to the primary's S3 encryption key:

| Method | Configuration | Use Case |
|---|---|---|
| Remote State (recommended) | `primary_remote_state_config` block in secondary tfvars | S3 backend; automatic sync from primary workspace |
| TFE Outputs | `tfe_organization`, `tfe_primary_workspace` + `TFE_TOKEN` | Primary managed in TFC/TFE |
| Explicit Key | `existing_s3_encryption_key` variable | Manual key management |
- Primary generates the encryption key on first deployment and exports it as a sensitive Terraform output.
- Secondary retrieves the key via TFE outputs (or remote state / explicit value).
- The key is stored in the `<cluster-name>-s3-storage-encryption` Kubernetes secret.
- The same key must be used across both clusters for DR recovery.
> **Note:** Standby apply will fail if the encryption key is not available. The pre-install module enforces this precondition.
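For the recommended remote-state method, the secondary's tfvars would carry something like the sketch below. The attribute names inside the block are illustrative and should be checked against the module's variables; the bucket, region, and key values must match the primary's backend config exactly.

```hcl
# Sketch of the secondary's tfvars (attribute names are illustrative;
# values must mirror the primary's backend-configs/primary-aws.hcl)
primary_remote_state_config = {
  bucket    = "logscale-tf-backend"
  region    = "us-west-2"
  key       = "env:/logscale-aws-eks"
  workspace = "primary"
}
```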
## Terraform Backend Configuration

The repository uses workspace-aware S3 backend configuration with separate `.hcl` files for primary and secondary clusters. This allows independent state management for each DR cluster.
**Backend Configuration Files:**

| File | Purpose |
|---|---|
| `backend.tf` | Declares empty S3 backend (values loaded from backend config files) |
| `backend-configs/primary-aws.hcl` | Backend config for primary workspace |
| `backend-configs/secondary-aws.hcl` | Backend config for secondary workspace |
| `backend-configs/example.hcl` | Template for creating new backend configs |
Example `backend-configs/primary-aws.hcl`:

```hcl
# Backend configuration for PRIMARY workspace
bucket  = "logscale-tf-backend"
region  = "us-west-2"
key     = "env:/logscale-aws-eks"
profile = "your-aws-profile"
encrypt = true
```
**S3 State File Layout:** With the above config and the `primary` workspace, the actual state path in S3 is `env:/primary/env:/logscale-aws-eks`. Terraform's S3 backend automatically prepends `env:/<workspace>/` when using non-default workspaces.
**Initialization Workflow:**

```shell
# Primary cluster
terraform workspace select primary   # or: terraform workspace new primary
terraform init -backend-config=backend-configs/primary-aws.hcl

# Secondary cluster
terraform workspace select secondary # or: terraform workspace new secondary
terraform init -backend-config=backend-configs/secondary-aws.hcl
```

> **Important — Workspace Path Workaround:** The `terraform_remote_state` data source does not automatically prepend the `env:/<workspace>/` prefix when reading another workspace's S3 state. The code in `locals.tf` works around this by manually constructing the full key path (e.g., `env:/primary/env:/logscale-aws-eks`) from the `workspace` and `config.key` values in `primary_remote_state_config`. This means the secondary's `primary_remote_state_config` must specify `workspace = "primary"` and the exact same `key` value used in the primary's backend config.
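The path construction in that workaround is simple enough to model directly. Assuming `locals.tf` builds the key as `env:/<workspace>/<key>` (with the default workspace stored at the bare key, per the S3 backend's behavior), a minimal sketch is:

```python
def remote_state_s3_key(workspace: str, backend_key: str) -> str:
    """Model of the locals.tf workaround: terraform_remote_state does
    not add the workspace prefix itself, so the full S3 object key is
    built by hand from the workspace name and the backend config's key."""
    if workspace == "default":
        # Terraform stores the default workspace at the bare key.
        return backend_key
    return f"env:/{workspace}/{backend_key}"

print(remote_state_s3_key("primary", "env:/logscale-aws-eks"))
# env:/primary/env:/logscale-aws-eks
```

The doubled `env:/` in the result is expected here: the first comes from the workspace prefix, the second from the literal `key` value in the backend config.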
## Pre-Deployment Checklist

Run these verification commands before proceeding:

```shell
# 1. Verify AWS identity and permissions
aws sts get-caller-identity

# 2. Verify Terraform backend access
aws s3 ls s3://<terraform-backend-bucket>/

# 3. Verify Terraform version
terraform version

# 4. Initialize Terraform with backend config and verify workspaces
terraform init -backend-config=backend-configs/primary-aws.hcl
terraform workspace list
# Should show: primary, secondary

# 5. Verify kubectl contexts
kubectl config get-contexts | grep dr-

# 6. Verify Route53 hosted zone
aws route53 list-hosted-zones --query "HostedZones[?Name=='<your-zone>.']"
```
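The hosted-zone check in step 6 returns JSON, which is where the `jq` requirement from the software table comes in. As an illustration of that kind of verification script (shown in Python rather than jq, and against an abbreviated sample of the CLI's output shape), a presence check might look like:

```python
import json

def zone_exists(cli_output: str, zone: str) -> bool:
    """Check `aws route53 list-hosted-zones` output for a zone.
    Route53 stores zone names with a trailing dot, so one is appended."""
    zones = json.loads(cli_output).get("HostedZones", [])
    return any(z.get("Name") == zone + "." for z in zones)

# Abbreviated sample of the CLI output shape (ids are placeholders).
sample = '{"HostedZones": [{"Id": "/hostedzone/EXAMPLE", "Name": "dr.example.com."}]}'
print(zone_exists(sample, "dr.example.com"))
# True
```

The equivalent jq filter is a one-liner over the same field; the point is simply that the deployment's verification scripts parse this JSON rather than eyeballing it.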