Scaling a HumioCluster Up or Down

Scaling up a HumioCluster

To scale up a HumioCluster, increase the nodeCount value of the HumioCluster node pool, and the humio-operator will create the additional pods.
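As a sketch, the nodeCount change can be applied with kubectl. The cluster name (example-humiocluster) and namespace (logging) below are placeholders, and the field path assumes the standard HumioCluster spec layout; adjust to match your resource:

```shell
# Sketch: scale up by raising nodeCount (cluster name and namespace are
# placeholders; verify the spec layout of your HumioCluster resource first).
kubectl -n logging patch humiocluster example-humiocluster \
  --type=merge -p '{"spec":{"nodeCount":6}}'

# For a cluster using named node pools, edit the nodeCount of the matching
# node pool entry instead, e.g. via:
#   kubectl -n logging edit humiocluster example-humiocluster
```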

Scaling down a HumioCluster

At the time of writing, scaling down a HumioCluster is more involved, as there is no built-in support for carrying this out. Lowering the nodeCount value by itself will not immediately start evicting or removing pods.

Overall, there are a few different strategies depending on the type/role of the node/pod we want to remove.

  1. Lower the nodeCount value. For simplicity's sake, keep all other configuration and versions unchanged during the scale-down. We assume only one node pool is scaled down at a time, so repeat this entire process for each node pool that needs to be scaled down.
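Before picking nodes to evict, it can help to list the pods the operator manages for the cluster. This is a sketch; the cluster name and namespace are placeholders, and the label key is an assumption — check the actual labels on your pods with kubectl get pods --show-labels:

```shell
# Sketch: list this cluster's pods and where they run, to inform which nodes
# to mark for eviction (cluster name, namespace, and label key are assumptions).
kubectl -n logging get pods \
  -l app.kubernetes.io/instance=example-humiocluster -o wide
```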

  2. Go to the LogScale cluster administration UI, under Cluster nodes, and click the button for the Mark for eviction action on the nodes we want to remove.

    1. Remember to take zone information into account when picking which nodes to mark for eviction.

  3. Wait until each node is done with eviction.

  4. If we want to remove a pod that does not store segments: these are nodes that are neither storage nodes nor digest nodes. They are typically nodes that primarily serve API calls, UI components, query coordination, ingest, and the like.

    1. Manually use kubectl delete pod to delete the pods for the nodes we marked for eviction.

      1. After the pods are gone, we have a couple of choices depending on the LogScale version:

        1. LogScale 1.82+:

          1. Either: Wait a couple of hours and LogScale will automatically remove dead nodes from the cluster.

          2. OR: Go to the LogScale cluster administration UI, under Cluster nodes, and click the button for the Remove node action.

        2. LogScale <1.82:

          1. Go to the LogScale cluster administration UI, under Cluster nodes, and click the button for the Remove node action.
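The kubectl side of the steps above can be sketched as follows. The pod name, cluster name, namespace, and label key are all placeholders/assumptions:

```shell
# Sketch: delete the pod for an evicted, non-segment-storing node
# (pod name and namespace are placeholders).
kubectl -n logging delete pod example-humiocluster-core-abc123

# Confirm the pod is gone before removing the node in the LogScale UI
# (label key is an assumption; verify against your pods' actual labels).
kubectl -n logging get pods \
  -l app.kubernetes.io/instance=example-humiocluster
```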

  5. If we want to remove a pod that stores segments: these are nodes doing storage or digest.

    1. Manually use kubectl delete pod to delete the pods for the nodes we marked for eviction.

    2. When the pod is gone and the cluster sees the node as down: go to the LogScale cluster administration UI, under Cluster nodes, and click the button for the Remove node action.
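The deletion step for a segment-storing node can be sketched with kubectl, blocking until Kubernetes has fully removed the pod before proceeding to the Remove node action in the UI. The pod name and namespace are placeholders:

```shell
# Sketch: delete the pod for an evicted storage/digest node
# (pod name and namespace are placeholders).
kubectl -n logging delete pod example-humiocluster-core-def456

# Block until the pod object is fully removed from Kubernetes, then use the
# Remove node action in the LogScale cluster administration UI.
kubectl -n logging wait --for=delete \
  pod/example-humiocluster-core-def456 --timeout=5m
```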