Humio Operator 0.0.14 GA (2020-10-14)
Version | Type | Release Date | Config. Changes
---|---|---|---
0.0.14 | GA | 2020-10-14 | Yes
Version 0.0.14 of the Humio Operator contains changes related to how Node UUIDs are set. This fixes an issue where pods may lose their node identity and show as red/missing in the LogScale Cluster Administration page under Cluster Nodes when they are scheduled in different availability zones.
When upgrading to version 0.0.14 of the Humio Operator, it is necessary to add the following to the `HumioCluster` spec to maintain compatibility with how previous versions of the Operator set UUID prefixes:
```yaml
spec:
  nodeUUIDPrefix: "humio_{{.Zone}}_"
```
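For context, a minimal `HumioCluster` resource with this prefix set might look like the sketch below; the resource name is illustrative and all other spec fields are omitted:

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster   # illustrative name
spec:
  # Preserve the UUID prefix used by operator versions prior to 0.0.14
  nodeUUIDPrefix: "humio_{{.Zone}}_"
```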
This change must be completed in the following order:
1. Shut down the Humio Operator by deleting it. This can be done by running:
   `kubectl delete deployment humio-operator -n humio-operator`
2. Make the above node UUID change to the `HumioCluster` spec.
3. Upgrade the Humio Operator (see the command sketch after this list).
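As a rough sketch, steps 2 and 3 could be carried out with commands like the following; the cluster name, namespace, and manifest file name are illustrative, and the actual upgrade command depends on how the operator was originally installed:

```sh
# Step 2: edit the HumioCluster spec and add the nodeUUIDPrefix field shown above
kubectl edit humiocluster example-humiocluster -n logging

# Step 3: upgrade the operator by re-deploying it with whichever method was used
# originally (Helm chart or raw manifests), for example:
kubectl apply -f humio-operator-0.0.14.yaml   # illustrative manifest file name
```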
If creating a fresh LogScale cluster, the `nodeUUIDPrefix` field should be left unset.
The simplest way to migrate to the new UUID prefix is by starting with a fresh `HumioCluster`. Otherwise, the effect of this change depends on how the `HumioCluster` is configured.
If using S3 with ephemeral disks, LogScale nodes will lose their identity when scheduled to new nodes with fresh storage if this change is not made. If you'd like to migrate to the new node UUID prefix, ensure `autoRebalancePartitions: false` is set and then perform the upgrade. In the LogScale Cluster Administration page under Cluster Nodes, you will notice that the old nodes show as red/missing and the new nodes do not have partitions. It is necessary to migrate the storage and digest partitions from the old nodes to the new nodes and then remove the old nodes. You may need to terminate the instances which contain the LogScale data one at a time so that they generate new UUIDs. Ensure the partitions are migrated before terminating the next instance. Once all old nodes are removed, `autoRebalancePartitions` can be set back to `true` if desired.
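As a rough illustration, during this migration the relevant part of the `HumioCluster` spec might look like the sketch below; `nodeUUIDPrefix` is intentionally left unset so that the new default prefix is used:

```yaml
spec:
  # Keep automatic partition rebalancing off until the old nodes have been
  # migrated and removed, then set this back to true if desired.
  autoRebalancePartitions: false
```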
If using PVCs, it is not strictly necessary to adjust the `nodeUUIDPrefix` field, as the node UUID is stored in the PVC. If the PVC is bound to a zone (such as with AWS), then this is not an issue. If the PVC is not bound to a zone, then you may still have the issue where nodes lose their identity when scheduled in different availability zones. If this is the case, nodes must be manually removed from the LogScale Cluster Administration page under Cluster Nodes, while taking care to first migrate storage and digest partitions away from each node prior to removing it from the cluster.
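One way to check whether the persistent volumes backing the cluster are bound to a zone is to inspect their zone labels. This is only a sketch; the label keys shown are the standard Kubernetes topology labels, not anything specific to the Humio Operator:

```sh
# List persistent volumes together with their zone labels. Older clusters use
# failure-domain.beta.kubernetes.io/zone, newer ones topology.kubernetes.io/zone.
kubectl get pv -L topology.kubernetes.io/zone,failure-domain.beta.kubernetes.io/zone
```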