## Cluster Sizing
The infrastructure supports predefined cluster sizes optimized for different workloads:
| Size | Node Count | Use Case | Compute Types |
|---|---|---|---|
| xsmall | ~13 nodes | Development/Testing | E5.Flex, DenseIO.E5 (3 digest nodes) |
| small | ~23 nodes | Small Production | E5.Flex, DenseIO2.16 (6 digest nodes) |
| medium | ~52 nodes | Medium Production | E4.Flex, DenseIO2.16 (21 digest nodes) |
| large | ~78 nodes | Large Production | E4.Flex, DenseIO2.24 (42 digest nodes) |
| xlarge | ~138 nodes | Enterprise | E4.Flex, DenseIO2.24 (78 digest nodes) |
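
A size is typically selected through a single input variable. The sketch below is hypothetical: the variable name `cluster_size` and its validation block are assumptions, not confirmed inputs of this module.

```hcl
# Hypothetical sketch: selecting one of the predefined cluster sizes.
# The variable name "cluster_size" is an assumption for illustration.
variable "cluster_size" {
  description = "Predefined cluster size (xsmall, small, medium, large, xlarge)"
  type        = string
  default     = "small"

  validation {
    condition     = contains(["xsmall", "small", "medium", "large", "xlarge"], var.cluster_size)
    error_message = "cluster_size must be one of: xsmall, small, medium, large, xlarge."
  }
}
```

The validation block fails `terraform plan` early if an unsupported size is passed, rather than surfacing a confusing error deeper in the node pool configuration.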
Each size configures specialized node pools (infrastructure only; applications are deployed separately):

- **System**: Node pool for Kubernetes system components (CoreDNS, etc.)
- **Digest**: Node pool for LogScale data processing with NVMe storage
- **Ingest**: Node pool for LogScale data ingestion
- **UI**: Node pool for the LogScale web interface
- **Ingress**: Node pool for traffic proxying/load balancing
- **Strimzi**: Node pool for Kafka brokers and controllers (optional)
> **Note:** These are compute node pools with the appropriate labels and taints. The actual application deployments (Falcon LogScale Collector, Kafka, ingress controllers) are installed by `module.logscale` during deployment.
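
The per-pool labels and taints might be declared along these lines. This is a hypothetical sketch: the pool names mirror the list above, but the label/taint keys and the `local.node_pools` structure are illustrative, not the module's actual schema.

```hcl
# Hypothetical sketch: labels and taints that dedicate each pool to its role.
# Key names (e.g. "logscale.io/pool") are assumptions for illustration.
locals {
  node_pools = {
    digest = {
      labels = { "logscale.io/pool" = "digest" }
      taints = [{ key = "logscale.io/digest", value = "true", effect = "NO_SCHEDULE" }]
    }
    ingest = {
      labels = { "logscale.io/pool" = "ingest" }
      taints = [{ key = "logscale.io/ingest", value = "true", effect = "NO_SCHEDULE" }]
    }
    # ... the system, ui, ingress, and strimzi pools follow the same shape
  }
}
```

With `NO_SCHEDULE` taints in place, the application charts installed by `module.logscale` must carry matching tolerations and node selectors, which is what keeps digest workloads pinned to the NVMe-backed nodes.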
**DR Standby Mode:** When `dr = "standby"`, the UI and Ingest node pools are automatically disabled (scaled to zero); only the System, Digest, Strimzi, and Ingress pools remain active. This reduces cost while preserving the ability to bring the standby cluster online quickly during failover. The Ingress pool stays enabled to support load balancer health checks.
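
The standby behavior can be expressed as a conditional on the pool sizes. This is a minimal sketch: the `dr` variable matches the text above, but the per-pool count variables and local names are assumptions.

```hcl
# Hypothetical sketch: scale UI and Ingest pools to zero in DR standby mode.
# "var.dr" comes from the text; the count variables are illustrative.
locals {
  is_standby = var.dr == "standby"

  ui_node_count     = local.is_standby ? 0 : var.ui_node_count
  ingest_node_count = local.is_standby ? 0 : var.ingest_node_count
  # System, Digest, Strimzi, and Ingress pools keep their configured counts,
  # so the standby cluster can serve health checks and be promoted quickly.
}
```

Because the pools are scaled to zero rather than destroyed, failover only requires raising the counts back to their configured values, with no node pool re-creation.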