Maintaining and Changing the Implementation

This section of the documentation covers maintaining, upgrading, and scaling the reference architecture implementation.

Upgrading LogScale

You can upgrade LogScale by setting logscale_image_version in your TFVAR_FILE to the desired target version:

terraform
logscale_image_version = "1.217.0"

Apply the update:

shell
terraform apply -target module.logscale -var-file $TFVAR_FILE

This updates the Kubernetes manifest that defines the LogScale cluster, triggering the Humio Operator to upgrade the cluster pods to the new version.

Scaling the Architecture

Perform all scaling operations during maintenance windows. Keep in mind that when changing pod resourcing, some PersistentVolumeClaims (PVCs) will not be replaced automatically.

For example, if a Kafka node has a persistent volume claim of 1 TB and the new size calls for 2 TB, the 1 TB PVC will not be replaced without manual intervention.
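Where the StorageClass has allowVolumeExpansion: true, an existing PVC can instead be grown in place with a kubectl patch. The sketch below only prints the command rather than running it, and the PVC name, namespace, and size are hypothetical examples, not names from this architecture:

```shell
# Sketch only: build (and print, rather than run) the kubectl patch that grows
# a PVC in place. Requires allowVolumeExpansion: true on the StorageClass.
# PVC name, namespace, and size are placeholder examples.
pvc_expand_cmd() {
  pvc=$1; ns=$2; size=$3
  echo "kubectl -n $ns patch pvc $pvc -p '{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$size\"}}}}'"
}

pvc_expand_cmd data-kafka-0 logging 2Ti
```

Review the printed command before executing it against a live cluster.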

Kafka Node Pod Scaling

The current Strimzi module builds nodes in two groups:

  1. Controller/broker nodes

  2. Broker-only nodes

Depending on the number of nodes selected for an architecture, you will have 3, 5, or 7 nodes acting as controllers for the environment. This is determined by the following locals in Terraform:

terraform
# Convert the given number of broker pods into a controller/broker split and account for the smallest
# architecture of 3 nodes
locals {
  possible_controller_counts = [for c in [3,5,7] : c if c < var.kafka_broker_pod_replica_count]
  controller_count = var.kafka_broker_pod_replica_count <= 3 ? 3 : max(local.possible_controller_counts...)
  broker_count = var.kafka_broker_pod_replica_count <= 3 ? 0 : var.kafka_broker_pod_replica_count - local.controller_count
 
  kubernetes_namespace = var.k8s_namespace_prefix
}
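To sanity-check what a given kafka_broker_pod_replica_count yields, the same split can be reproduced outside Terraform. This is a sketch for planning purposes, not part of the module:

```shell
# Sketch: reproduce the controller/broker split from the Terraform locals.
# Counts of 3 or fewer collapse to 3 controllers and no broker-only nodes;
# otherwise the controller count is the largest of 3/5/7 below the total.
split_kafka_nodes() {
  count=$1
  if [ "$count" -le 3 ]; then
    echo "controllers=3 brokers=0"
    return
  fi
  controller=3
  for c in 5 7; do
    [ "$c" -lt "$count" ] && controller=$c
  done
  echo "controllers=$controller brokers=$((count - controller))"
}

split_kafka_nodes 9   # 9 pods split into 7 controller/broker and 2 broker-only
```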

LogScale Kubernetes Pod Scaling

LogScale pods in this architecture are designed to have a one-to-one relationship with the underlying Kubernetes nodes. However, while the underlying Kubernetes node groups support autoscaling, the Humio Operator does not currently support autoscaling of LogScale pods. In most cluster sizes, the desired pod count is less than the maximum number of Kubernetes nodes for that tier. For example, in a small:advanced architecture, the desired digest pod count is 6 while the maximum EKS digest node count is 12. In this situation, you can expand the Kubernetes node group by updating the node count in cluster_size.tpl to a new target count, which must not exceed the maximum EKS node count.
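Before editing cluster_size.tpl, it can help to check the target against the tier's maximum. A trivial sketch, using the small:advanced figures above (12-node digest maximum); the function name is illustrative only:

```shell
# Sketch: reject a proposed node count that exceeds the tier's EKS maximum.
# The 12-node maximum matches the small:advanced digest example above.
validate_node_count() {
  target=$1; max=$2
  if [ "$target" -gt "$max" ]; then
    echo "rejected: $target exceeds maximum of $max"
  else
    echo "ok: scale to $target nodes"
  fi
}

validate_node_count 10 12
```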

After saving cluster_size.tpl, run:

shell
terraform apply -target module.logscale -var-file $TFVAR_FILE

Important

Moving from a larger architecture to a smaller one, for example from advanced to basic, is not recommended.