Upgrading LogScale Operator on Kubernetes
The upgrade procedure for the LogScale Operator depends on how the Operator was installed. If the installation was performed using Helm without the `--skip-crds` flag, then only the Helm upgrade step is needed. Otherwise, upgrade the CRDs first and then the Helm chart.
The version of the Helm chart should match that of the LogScale Operator. For this reason, it is not recommended to change the `image.tag` of the LogScale Operator Helm chart; instead, update the chart to the desired version.
Version Matrix
The following table matches Operator, LogScale and Kubernetes versions.
Table: Operator/LogScale/Kubernetes Version Matrix
Operator Version | LogScale Version | Kubernetes Version |
---|---|---|
0.2.0 | <= 1.18.x | <= 1.18 |
0.7.0 | <= 1.25.x | <= 1.18 |
0.10.1 | <= 1.26.x | <= 1.18 |
0.10.2 | <= 1.26.x | >= 1.19 and <= 1.25 |
0.14.1 | <= 1.37.x | >= 1.19 and <= 1.25 |
0.16.0 | >= 1.51.x and <= 1.69.x | >= 1.19 and <= 1.25 |
0.17.0 | >= 1.70.x | >= 1.19 and <= 1.25 |
0.18.0 | >= 1.70.x | >= 1.19 and <= 1.25 |
0.19.0 | >= 1.70.x | >= 1.19 and <= 1.25 |
0.20.0 | >= 1.70.x | >= 1.19 and <= 1.25 |
0.20.1 | >= 1.100.x | >= 1.19 and <= 1.27 |
0.20.2 | >= 1.100.x | >= 1.21 and <= 1.27 |
0.20.3 | >= 1.100.x | >= 1.21 and <= 1.27 |
0.21.0 | >= 1.118.x | >= 1.21 and <= 1.29 |
0.22.0 | >= 1.118.x | >= 1.21 and <= 1.29 |
0.23.0 | >= 1.118.x | >= 1.21 and <= 1.29 |
0.24.0 | >= 1.118.x | >= 1.21 and <= 1.29 |
0.25.0 | >= 1.118.x | >= 1.21 and <= 1.29 |
0.26.0 | >= 1.118.x | >= 1.21 and <= 1.31 |
Upgrading the Custom Resource Definitions
Obtain the desired version from the Releases page.
$ export HUMIO_OPERATOR_VERSION=x.x.x
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioclusters.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioexternalclusters.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioactions.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioaggregatealerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioalerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiofilteralerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioingesttokens.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioparsers.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiorepositories.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioscheduledsearches.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioviews.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiobootstraptokens.yaml
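The twelve commands above differ only in the CRD name, so they can be generated with a loop. This is a convenience sketch, not part of the official procedure; the version `0.26.0` is only an example, so substitute the release you are upgrading to.

```shell
# Generate one "kubectl apply" command per CRD; remove the leading "echo"
# (or pipe the output to sh) to actually apply them.
export HUMIO_OPERATOR_VERSION=0.26.0   # example version; substitute your target release
base="https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases"
crds="humioclusters humioexternalclusters humioactions humioaggregatealerts \
humioalerts humiofilteralerts humioingesttokens humioparsers \
humiorepositories humioscheduledsearches humioviews humiobootstraptokens"
for crd in $crds; do
  echo kubectl apply --server-side -f "${base}/core.humio.com_${crd}.yaml"
done
```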
If this is the first time upgrading using the `--server-side` flag, there may be conflicts. If so, add the `--force-conflicts` flag:
$ export HUMIO_OPERATOR_VERSION=x.x.x
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioclusters.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioexternalclusters.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioactions.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioaggregatealerts.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioalerts.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiofilteralerts.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioingesttokens.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioparsers.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiorepositories.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioscheduledsearches.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioviews.yaml
$ kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiobootstraptokens.yaml
Note
This step can be skipped if the Helm chart was installed without `--skip-crds`, since Helm then manages the CRDs. Relying on this is not recommended, however, because uninstalling the Helm chart will then also remove the custom resources.
Helm
$ helm upgrade humio-operator humio-operator/humio-operator \
--namespace logging \
--version="${HUMIO_OPERATOR_VERSION}"
Operator Release Notes
Operator Version 0.26.0
Adds new options to update strategy as well as testing improvements. Removes unused `/tmp` volume. Note that upgrading to this version will cause all humio pods to restart.
Important highlights:
Add support for Kubernetes 1.30 and 1.31.
Add options `enableZoneAwareness` and `maxUnavailable` to the update strategy. Defaults for `enableZoneAwareness` and `maxUnavailable` are `true` and `1`, respectively. With this feature, the zone is pinned when doing pod replacements for a given node pool, to ensure all pods in a given zone are replaced before moving on to the next zone. The `maxUnavailable` option allows either an absolute number or a percentage of the `nodeCount`.
Removed unused `/tmp` volume and volume mount.
Various improvements to testing.
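Based on the release note above, the new options sit alongside the existing update strategy settings. The following HumioCluster fragment is a sketch; the exact field placement should be verified against the 0.26.0 CRD.

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-cluster
spec:
  updateStrategy:
    type: RollingUpdate        # existing update strategy setting
    enableZoneAwareness: true  # default: true; pins a zone during pod replacements
    maxUnavailable: 25%        # absolute number or percentage of nodeCount; default: 1
```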
Operator Version 0.25.0
Adds new HumioBootstrapToken type for the operator to authenticate to the cluster, requirement for non-empty resource names, testing improvements. Removes the `auth` sidecar container. Note that upgrading to this version will cause all humio pods to restart.
Important highlights:
Require non-empty names for resources such as Actions, Aggregate Alerts, Alerts, FilterAlerts, IngestTokens, Parsers, ScheduledSearches, Views.
Includes support for the HumioBootstrapToken, which sets `BOOTSTRAP_ROOT_TOKEN_HASHED` on the humio pods and is then used by the operator for authentication to the cluster. This removes the need for the `auth` sidecar container as well as the auth service account. The `authServiceAccountName` field is now deprecated.
Various improvements to testing.
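A minimal HumioBootstrapToken resource might look like the sketch below; the `managedClusterName` field name is an assumption and should be checked against the humiobootstraptokens CRD.

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioBootstrapToken
metadata:
  name: example-cluster
spec:
  managedClusterName: example-cluster  # assumed field; the HumioCluster to authenticate against
```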
Operator Version 0.24.0
This release includes support for new LogScale functionality, removes unused code and reworks how the operator tests are performed.
Important highlights:
Added CRDs for aggregate alerts and scheduled searches.
Added `extraHostnames` field to TLS configuration to allow users to specify a list of additional hostnames they want to be appended in the TLS certificates.
Move operator API calls to LogScale to a new Kubernetes Service object. This new Service object allows users to specify whether pods should be included in the targets. Each node pool configuration now supports a field `nodePoolFeatures`, which includes a field `allowedAPIRequestTypes` that can be set to `[]` to exclude pods from that node pool from this service. The default value for `allowedAPIRequestTypes` is `[OperatorInternal]`.
Fix bug where the operator sometimes created new pods with the wrong pod revision annotation during pod replacements.
Ensure custom resources are requeued for reconcile after 15 seconds when no work is detected. This fixes a range of cases where objects are not properly reconciled periodically if e.g. external entities change. Previously a restart of the operator would kick off a new reconciliation, but with this change this is no longer needed.
Removed unused node ID label for pods and persistent volume claims.
Rework mock client used during tests and bump various dependencies during build.
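As a sketch of the `allowedAPIRequestTypes` option described above (the node pool layout is assumed from common HumioCluster examples and should be verified against the CRD):

```yaml
spec:
  nodePools:
    - name: ingest-only
      spec:
        nodeCount: 3
        nodePoolFeatures:
          allowedAPIRequestTypes: []  # exclude this pool's pods from the operator's API service
```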
Operator Version 0.23.0
This is a small release and includes support for Filter Alerts and support for fetching sensitive parts of actions from secrets.
Important highlights:
Add support for Filter Alerts.
Add support for fetching sensitive parts of actions from secrets.
Operator Version 0.22.0
This is a small release and mostly fixes a few smaller bugs, leverages the LogScale parser V2 API and a handful of changes to how the operator itself is built and tested.
Important highlights:
Assume latest version of LogScale cluster pods if image tag cannot be properly parsed.
Fix bug that caused parser update calls when a HumioParser custom resource did not contain any value for TagFields or TestData.
Fix bug where targetPort for Kubernetes Service object was not set to the container port of LogScale cluster pods.
Use the new LogScale parser V2 API when creating and updating parsers.
Operator Version 0.21.0
This release introduces changes to the set of environment variables for pods and will cause cluster pod restarts to roll this out. The bump to the minimum supported LogScale version includes cleaning up a number of legacy behaviors. It also introduces a new field named CommonEnvironmentVariables, a common set of environment variables that all cluster pods should inherit.
Important highlights:
Bump minimum supported version to LogScale 1.118.0
Add test coverage for Kubernetes 1.28 and 1.29
Ignore NodeName and VolumeMount with prefix "kube-api-access-" when logging podSpecDiff
Add support for HumioCluster.Spec.CommonEnvironmentVariables. This provides a way of defining a common set of environment variables that all cluster pods will inherit. If a node pool explicitly sets the same environment variable, then the node pool specific value takes precedence over the specified common environment variables (credit: bderrly)
Other changes:
build: Upgrade to Go v1.22
build: Upgrade to controller-gen v0.14.0
build: Upgrade various smaller dependencies
test: Refactor test execution
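A sketch of the precedence rule for CommonEnvironmentVariables; `EXAMPLE_SETTING` is a hypothetical variable used only for illustration:

```yaml
spec:
  commonEnvironmentVariables:
    - name: EXAMPLE_SETTING      # hypothetical variable, for illustration only
      value: "common-value"      # inherited by all node pools...
  nodePools:
    - name: ingest
      spec:
        nodeCount: 3
        environmentVariables:
          - name: EXAMPLE_SETTING
            value: "pool-value"  # ...unless a pool sets it; the pool value takes precedence
```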
Operator Version 0.20.3
Fixes a bug where the humio-operator will not properly run the upgrade tasks for a HumioCluster resource when that HumioCluster has been migrated to use only node pools, or when a node pool has been removed as part of an upgrade.
Ignore RunAsUserID and QueryOwnershipType when handling alerts.
Operator Version 0.20.2
Fixes a bug where the humio-operator may mistakenly see a LogScale repository as empty during LogScale pod restarts. It is now required to explicitly set AllowDataDeletion to true for a given HumioRepository object if the humio-operator is allowed to do actions that may delete data in a LogScale repository by either lowering retention settings or by deleting the repository entirely.
Adds identifier to log events from humio-operator to have a stable ID for Kubernetes objects it is reconciling.
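A sketch of a HumioRepository that opts in to data deletion; field names other than `allowDataDeletion` follow common HumioRepository examples and should be verified against the CRD.

```yaml
apiVersion: core.humio.com/v1alpha1
kind: HumioRepository
metadata:
  name: example-repository
spec:
  managedClusterName: example-cluster
  name: example-repository
  allowDataDeletion: true  # required before the operator may lower retention or delete the repository
```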
Operator Version 0.20.1
There is a fix for updating connections for a HumioView resource.
The baked-in logic for OpenShift SecurityContextConstraints was removed, as it has been stale for a while and is not covered by the tests.
Small fix for the Helm chart when trying to define operator.nodeSelector in the values.yaml file.
Fix bug where we tried updating ServiceAccount annotations before the ServiceAccount was created.
Minimum LogScale version was bumped to 1.100.0.
Fix bug where pods were restarted in a rolling fashion when updateStrategy was set to ReplaceAllOnUpdate and the change did not include a version upgrade.
Various dependency upgrades fixing various deprecations. Most notably an issue which bumps the helper image, which will cause all pods to get restarted so they use the new helper image.
Operator Version 0.20.0
This release changes the default node count to 0, stops creating Kubernetes services for empty node pools, skips configuring `ZOOKEEPER_URL_FOR_NODE_UUID` for new LogScale versions, adds support for the role-permissions.json file, and fixes a bug where only the first environment variable source was used to detect changes.
Change default nodeCount from 3 to 0. This means the user now has to explicitly set the desired nodeCount instead of relying on a default value.
Skip creating a Kubernetes service for node pools of size 0.
Bump version with automatic partition management on by default to 1.89.0, and skip automatically configuring ZooKeeper node UUID URL for new LogScale versions.
Add support for role-permissions.json file and mark use of view-group-permissions.json as deprecated. Official docs on the feature: Setting up Roles in a File
Include all environment variable sources when getting environment variable sources. Prior to this, changes to environment variable sources were only detected using the first entry found, causing no changes to be detected if there are multiple of them.
Bump humio/cli dependency to use new graphql library.
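Because the default nodeCount changed from 3 to 0, clusters that previously relied on the default must now set it explicitly, for example:

```yaml
spec:
  nodeCount: 3  # must be set explicitly as of 0.20.0; the default is now 0
```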
Operator Version 0.19.1
This release contains a fix when using the watchNamespaces flag when installing the operator using the operator helm chart.
Fixes an issue where permissions to HumioAlerts and HumioActions are not included when using the watchNamespaces flag when installing the operator using the operator helm chart.
Operator Version 0.19.0
This release removes automatic setting of `HUMIO_GC_OPTS`, disables deprecated calls to the LogScale API, and adds support for a custom PriorityClass on pods. Due to the default changes to the `HUMIO_GC_OPTS` environment variable, upgrading the operator to this version will cause humio pods to be restarted. Additionally, there were changes to certificates for TLS-enabled clusters, which upon upgrade of the operator will also cause restarts of the humio pods.
Remove automatic setting of `HUMIO_GC_OPTS` and rely on the defaults set by LogScale. It is still possible to set `HUMIO_GC_OPTS` via `environmentVariables`.
Disable deprecated calls to the LogScale API.
Support for a custom PriorityClass on pods.
Operator Version 0.18.0
This release adds support for ThrottleField on HumioAlert types, adds support for TopologySpreadConstraints, updates various dependencies, adds validation for Kubernetes 1.25, and adds support for additional chart labels.
RollingUpdateBestEffort update strategy no longer dependent on stable or minor version differences.
Validation of support for Kubernetes 1.25.
Support for TopologySpreadConstraints.
Refactor of chart labels and allow additional common labels (credit: gawa).
Update several go module dependencies.
Operator Version 0.17.0
This release contains support for LogScale 1.70.0+, where fetching UUIDs from ZooKeeper is now deprecated. With this operator release, it is now possible to remove the `ZOOKEEPER_URL` environment variable. Upgrade to this release if running LogScale 1.70.x+.
Support for removal of `ZOOKEEPER_URL`.
No longer set `KAFKA_MANAGED_BY_HUMIO=true`, as true is the default.
Update several go module dependencies.
Operator Version 0.16.0
This release contains a number of fixes and updates, as well as a beta feature that allows for local PVCs. This release also bumps the default helper container image as well as changes the container names in the pod, which will cause cluster pods to be recreated when the operator is upgraded. This release requires LogScale version of 1.51.x or greater.
Faster replacement of pods during upgrades. Rather than creating pods incrementally during an upgrade, all pods are now created simultaneously.
Remove `NET_BIND_SERVICE` from the operator pod and make the filesystem read-only. Move runAsNonRoot from the operator pod level to the container level.
Remove `NET_BIND_SERVICE` from the humio container. Requires LogScale 1.51.x+.
Add `LogScalePersistentVolumeClaimPolicy` with `ReclaimType` of `OnNodeDelete`, which allows automatic cleanup of PVCs when using a local volume provisioner.
Prefix sidecar container names with `humio-`.
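The PVC reclaim policy from these notes might be configured as below; the `dataVolumePersistentVolumeClaimPolicy` field name is an assumption and should be verified against the HumioCluster CRD.

```yaml
spec:
  dataVolumePersistentVolumeClaimPolicy:  # assumed field name for LogScalePersistentVolumeClaimPolicy
    reclaimType: OnNodeDelete             # delete the PVC when its Kubernetes node is deleted
```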
Operator Version 0.15.0
This release contains a number of small fixes and updates.
Add minReadySeconds, which sets the minimum time in seconds that a pod must be ready before the next pod can be deleted when doing a rolling update.
Remove the `--installCRDs` flag from Helm and follow Helm 3 best practices using the `--skip-crds` flag. Removes support for Helm 2.
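A sketch of the minReadySeconds setting; its placement under `updateStrategy` is an assumption and should be verified against the CRD.

```yaml
spec:
  updateStrategy:
    minReadySeconds: 120  # assumed placement; wait 2 minutes of readiness between pod deletions
```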
Operator Version 0.14.2
This release contains a number of small fixes and updates.
Add support for pod annotations in the chart (credit: kmjayadeep)
Updates a number of dependencies
Fixes a bug where using `imageSource` may deploy an incorrect version of LogScale
Updates the operator to use a scratch base image, resulting in a smaller image size
Operator Version 0.14.1
This release updates the LogScale client so it no longer uses a deprecated API endpoint that is removed in LogScale 1.37.0. It is recommended to upgrade to this release prior to upgrading to LogScale 1.37.0.
Operator Version 0.14.0
This release introduces support for Node Pools, upgrade strategy options for LogScale upgrades, and adds a headless service for intra cluster communication. Deploying this release will cause all LogScale pods to be restarted simultaneously due to the migration to the headless service.
Important highlights:
Adds support for Node Pools, so different LogScale nodes can be split out by configuration. For example, to allow for ingest-only nodes
Adds support for automatically detecting processor count when no resource limits are set
Adds upgrade strategies that allow for rolling upgrades of LogScale
Adds feature to auto-detect the cores available to the LogScale container, so the resources for the pod may be omitted
Fixes issue where LogScale pods that have been evicted are not re-created
Fixes issue where LogScale pods that are pending cannot be updated
Fixes issue where the operator does not always retry when failing to create LogScale resources (HumioParser, HumioView, HumioRepository)
The `--server-side` flag is now required when running `kubectl apply` on the HumioCluster CRD due to its size
Operator Version 0.13.0
This release bumps the default helper container image, which will
cause cluster pods to be recreated when the operator gets upgraded
in order to leverage the new helper image tag. If recreation of pods
is undesired it is possible to lock the helper image by setting
helperImage
in the
HumioCluster
resource to the current version
before upgrading the operator. If the helper image tag gets locked,
we recommend removing this explicit helper image tag during the next
LogScale cluster upgrade.
Important highlights:
Fixes bug where a HumioView is updated on every reconcile even when it hasn't changed (credit: Crevil)
Fixes multiple bugs where the controller performs unnecessary reconciles resulting in high CPU usage
Operator Version 0.12.0
Important highlights:
Adds a startupProbe to the LogScale pods
Fixes issue where the livenessProbe and readinessProbe on the LogScale pods may fail and cause a cascading failure
Fixes issue where the operator may become stuck when the LogScale cluster does not respond to requests
Adds feature to specify secret references for certain fields of HumioAction resources (credit: Crevil)
Adds feature to pull the value of a LogScale image from a configmap
Adds feature to pull the value of LogScale pod's environment variables from a configmap or secret
Mounts the LogScale pod's tmp volume under the same container mount that is used for the humio-data directory (applies to LogScale versions 1.33.0+)
Fixes a number of conflicts where the operator attempts to update old versions of resources it manages
Updates the cert-manager API to use cert-manager.io/v1 instead of cert-manager.io/v1beta1
Operator Version 0.11.0
Important highlights:
Fixes a bug where pods may not be created as quickly as they should during an upgrade or restart of LogScale.
Improved logging
Operator Version 0.10.2
Version 0.10.2 of the operator no longer works for Kubernetes versions prior to 1.19. This is because the operator now uses the networking/v1 api which does not exist in Kubernetes 1.18 and older.
Important highlights:
Updates the default humio version to 1.28.0
Uses `networking/v1` instead of the deprecated `networking/v1beta1`
Fix bug around installing and validating the license when running multiple HumioClusters
Operator Version 0.10.1
Version 0.10.0 was released with the default operator image tag version 0.9.1, while the intention was to use the default image tag of 0.10.0. This release fixes that so the new default image becomes 0.10.1 which includes all the fixes described in the notes for 0.10.0.
Operator Version 0.10.0
This release bumps the default helper container image, which will
cause cluster pods to be recreated when the operator gets upgraded
in order to leverage the new helper image tag. If recreation of pods
is undesired it is possible to lock the helper image by setting
helperImage
in the
HumioCluster
resource to the current version
before upgrading the operator. If the helper image tag gets locked,
we recommend removing this explicit helper image tag during the next
LogScale cluster upgrade.
Important highlights:
Operator now reuses HTTP connections when possible for communicating with the LogScale API
Sidecar now reuses HTTP connections when possible for communicating with the LogScale API
Operator Version 0.9.1
No changes, see release notes for version 0.9.0.
Operator Version 0.9.0
This release drops support for LogScale versions prior to LogScale 1.26.0 and speeds up cluster bootstrapping significantly. With this release, the `Bootstrapping` state for the HumioCluster CRD has been removed entirely, so before upgrading to this release it is important to make sure that no HumioCluster resource is in the `Bootstrapping` state.
This release also bumps the default helper container image, which
will cause cluster pods to be recreated when the operator gets
upgraded in order to leverage the new helper image tag. If
recreation of pods is undesired it is possible to lock the helper
image by setting helperImage
in the HumioCluster
resource to the current
version before upgrading the operator. If the helper image tag gets
locked, we recommend removing this explicit helper image tag during
the next LogScale cluster upgrade.
Important highlights:
Drop support for LogScale versions prior to 1.26.0.
Drop the use of the `Bootstrapping` state for HumioCluster resources.
Set more detailed release version, commit and date. This version information is logged during container startup, and is also set as a custom `User-Agent` HTTP header for requests to the LogScale API.
Switch operator container logs to RFC 3339 format with second precision. LogScale container logs are unaffected, as this only changes the logs from the operator container.
Bugfix liveness and readiness probes for the HumioCluster CRD, so it is now possible to set an empty probe. If an empty probe is used, the operator will skip configuring that probe.
Additional logging for HumioExternalCluster when the API token test fails. Previously it would silently fail and the HumioExternalCluster would be stuck in `Unknown` state.
Bugfix where a license update is triggered even if the license was not changed.
Bugfix so LogScale storage and digest partition counts are correct when new clusters get created. Previously clusters would create storage and digest partitions based on LogScale's built-in defaults rather than the user-defined values `storagePartitionsCount` and `digestPartitionsCount` in the HumioCluster resource.
Operator Version 0.8.1
This release contains a fix for installing the LogScale license during the `Bootstrapping` state for the HumioCluster CRD.
Operator Version 0.8.0
This release adds support for LogScale 1.26.0 and newer. Upgrading to LogScale 1.26.0 is not supported with humio-operator versions prior to 0.8.0.
Important highlights:
License is now a required field on the HumioCluster resource. This must be present for both existing clusters and for bootstrapping new clusters.
The default LogScale image tag version has been updated to 1.24.3.
Operator Version 0.7.0
This release contains small bugfixes, exposes LogScale liveness and readiness probes, and updates operator-sdk and supporting tooling.
Important highlights:
Fixes a bug where the operator will try to clean up the CA Issuer even when not using cert-manager, resulting in logged warnings
Allows overriding of LogScale liveness and readiness probes
Fixes a bug where the HumioCluster may get stuck in a ConfigError state even when the cluster is healthy
Fixes a bug where the operator may panic when the LogScale pods are down
Operator Version 0.6.1
This release fixes a bug where the RBAC rules in the Helm chart had not been updated to include the new CRDs introduced in version 0.6.0.
Operator Version 0.6.0
This release contains new `HumioAlert` and `HumioAction` custom resources. This means these new CRDs must be applied before the upgrade, although it is recommended to apply CRDs during every upgrade.
Important highlights:
Adds LogScale Alerts and Actions support.
Adds the ability to look up the hostname from a secret.
Operator Version 0.5.1
This release fixes a bug where ingress resources may still be created when `spec.hostname` and `spec.esHostname` are not set.
Operator Version 0.5.0
Important highlights:
Upgrading to this release will replace the current HumioCluster pods.
The default JSON log format for LogScale has changed if running LogScale version 1.20.1 or later. See LogScale Internal Logging.
The default LogScale version has been updated to 1.20.1.
Operator Version 0.4.0
Important highlights:
Upgrading to this release will replace the current HumioCluster pods.
Fix for a bug where UUIDs are not assigned properly when not using `USING_EPHEMERAL_DISKS=true`. See below for additional information.
Adds support for managing LogScale licenses.
Requires explicitly defined storage. See below for additional information.
Additional information:
It is now required to explicitly define the storage configuration. This is because until now, the default has been `emptyDir`, which will result in loss of data if not also using bucket storage. If relying on the default storage configuration, it is now required to set either `spec.dataVolumeSource` or `spec.dataVolumePersistentVolumeClaimSpecTemplate`. It is necessary to use either a persistent storage medium or bucket storage to avoid data loss. See the example resources section on how to configure ephemeral or persistent storage.
Symptoms of the fixed UUID bug when not using `USING_EPHEMERAL_DISKS=true` include the appearance of missing nodes and nodes with no partitions assigned in the Cluster Administration page in the LogScale UI.
Fix for a bug where partitions may not be auto-balanced by the operator.
Fix to rolling restart logic to ensure that pods are only restarted one at a time.
Updates to various operator-managed resources so they now include the `ConfigError` state.
Fix a bug where a restart or update may fail if an existing pod is not in a Running state.
Change default humio version to 1.18.1.
Allow for additional labels for ingest token secrets.
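For example, persistent storage can be declared with `spec.dataVolumePersistentVolumeClaimSpecTemplate`, which takes a standard Kubernetes PVC spec (the storage class and size below are placeholders):

```yaml
spec:
  dataVolumePersistentVolumeClaimSpecTemplate:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 100Gi         # placeholder size
    storageClassName: standard # placeholder storage class name
```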
Operator Version 0.3.0
Important highlights:
Upgrading to this release will replace the current HumioCluster pods.
Add support for LogScale 1.19.0. LogScale 1.19.0 introduces some changes to how logging is performed, which is not taken into account for humio-operator versions prior to this release.
Additional information:
New field added to the HumioCluster CRD: `helperImage`. This field makes it possible to override the default container image used for the helper image. This is useful in scenarios where images should be pulled from a local container image registry.
New field added to the HumioCluster CRD: `disableInitContainer`. The init container is used to extract information about the availability zone from the Kubernetes worker node. If enabled, the auto partition rebalancing will use this to assign digest and storage partitions with availability zones in mind. When running in a single availability zone setup, it can make sense to disable the use of the init container to tighten up the permissions needed to run the pods of a HumioCluster.
New field added to the HumioCluster CRD: `terminationGracePeriodSeconds`. Previously pods were created without an explicit termination grace period, which meant that pods inherited the Kubernetes default behaviour of 30 seconds. In general LogScale should be able to gracefully terminate by itself, and when running with bucket storage and ephemeral nodes the termination should allow time for the LogScale node to upload data to bucket storage. The new default value is 300 seconds, but it can be overridden using this field.
Bump default LogScale version to 1.18.0. If the `Image` property on the HumioCluster is left out, this means that the cluster will get upgraded. Make sure to read the Humio Server 1.18.0 LTS (2020-11-26) release notes to confirm this migration is safe to do.
Leverage new suggested partition layouts. With LogScale 1.17.0+ we now rely on LogScale to suggest partition layouts for both digest and storage partitions. The benefit is that the suggested partition layouts take into account which availability zone the LogScale cluster nodes are located in.
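The three new HumioCluster fields described above could be set together as in this sketch (the helper image reference is a placeholder for a local registry):

```yaml
spec:
  helperImage: registry.example.com/humio/humio-operator-helper:0.3.0  # placeholder local-registry image
  disableInitContainer: true           # e.g. for single availability zone setups
  terminationGracePeriodSeconds: 300   # the new default; override if needed
```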
Operator Version 0.2.0
There is a new HumioView custom resource. This means the HumioView CRD must be applied before the upgrade (though it is recommended to apply CRDs during every upgrade). There are a number of new features and bug fixes in this release, which are described in the Release Notes.
Operator Version 0.1.2
This release fixes a bug where LogScale nodes using persistent
storage may receive a
NodeExists
error when starting
up. This applies to LogScale clusters using persistent storage, and
not clusters using ephemeral disks and bucket storage.
If your cluster is using persistent storage (for example,
Persistent
Volume Claims), it is important to either omit the
environment variable USING_EPHEMERAL_DISKS
or set it
to false
.
If your cluster is using ephemeral disks and bucket storage, it is
important to set the environment variable
USING_EPHEMERAL_DISKS
to
true
. This
setting is included in the
example
resources.
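For an ephemeral-disk cluster with bucket storage, the variable is set through the HumioCluster's environment variables, for example:

```yaml
spec:
  environmentVariables:
    - name: USING_EPHEMERAL_DISKS
      value: "true"  # omit or set to "false" for clusters on persistent storage
```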
This version also upgrades the helper image which is used as the
init container and sidecar container for pods tied to a
HumioCluster
resource. This will be treated
as an upgrade procedure, so all pods will be replaced.
Operator Version 0.1.1
No changes required.
Operator Version 0.1.0
No changes required, but it is important to note this version upgrades the helper image which is used as the init container and sidecar container for pods tied to a HumioCluster resource. This will be treated as an upgrade procedure, so all pods will be replaced.
Operator Version 0.0.14
Version 0.0.14 of the LogScale Operator contains changes related to how Node UUIDs are set. This fixes an issue where pods may lose their node identity and show as red/missing in the LogScale Cluster Administration page under Cluster Nodes when they are scheduled in different availability zones.
When upgrading to version 0.0.14 of the LogScale Operator, it is necessary to add the following to the HumioCluster spec to maintain compatibility with how previous versions of the Operator set UUID prefixes:
spec:
nodeUUIDPrefix: "humio_{{.Zone}}_"
This change must be completed in the following order:
1. Shut down the LogScale Operator by deleting its deployment. This can be done by running:
kubectl delete deployment humio-operator -n humio-operator
2. Make the above node UUID change to the HumioCluster spec
3. Upgrade the LogScale Operator
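The full sequence might look like the following shell sketch (the manifest filename, release name, and chart version are illustrative; this assumes the chart is installed from the humio-operator helm repository):

```shell
# 1. Shut down the Operator so it does not reconcile while
#    the nodeUUIDPrefix change is being applied
kubectl delete deployment humio-operator -n humio-operator

# 2. Apply the nodeUUIDPrefix change to the HumioCluster spec
#    (example-humiocluster.yaml is a placeholder for your manifest)
kubectl apply -f example-humiocluster.yaml

# 3. Upgrade the Operator via helm (version shown is illustrative)
helm upgrade humio-operator humio-operator/humio-operator \
  --namespace humio-operator --version 0.0.14
```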
If creating a fresh LogScale cluster, the
nodeUUIDPrefix
field should be
left unset.
Migration to the new node UUID Prefix
The simplest way to migrate to the new UUID prefix is by starting with a fresh HumioCluster. Otherwise, the effect of this change depends on how the HumioCluster is configured.
If using S3 with ephemeral disks, LogScale nodes will lose their identity when scheduled to new nodes with fresh storage if this change is not made. If you'd like to migrate to the new node UUID prefix, ensure autoRebalancePartitions: false and then perform the upgrade. In the LogScale Cluster Administration page under Cluster Nodes, you will notice that old nodes show as red/missing and the new nodes do not have partitions. It is necessary to migrate the storage and digest partitions from the old nodes to the new nodes and then remove the old nodes. You may need to terminate the instances containing the LogScale data one at a time so they generate new UUIDs. Ensure the partitions are migrated before terminating the next instance. Once all old nodes are removed, autoRebalancePartitions can be set back to true if desired.
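For reference, disabling automatic partition rebalancing before the upgrade might look like this on the HumioCluster spec:

```yaml
spec:
  # Disable automatic rebalancing while partitions are migrated
  # away from old nodes; set back to true once all old nodes
  # have been removed from the cluster.
  autoRebalancePartitions: false
```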
If using PVCs, it is not strictly necessary to adjust the nodeUUIDPrefix field, as the node UUID is stored in the PVC. If the PVC is bound to a zone (such as with AWS), then this is not an issue. If the PVC is not bound to a zone, then you may still have the issue where nodes lose their identity when scheduled in different availability zones. If this is the case, nodes must be manually removed from the LogScale Cluster Administration page under Cluster Nodes, taking care to first migrate storage and digest partitions away from each node before removing it from the cluster.
Operator Version 0.0.13
There are no special tasks required during this upgrade. However, it is worth noting that the operator-sdk version changed in version 0.0.13, so it is important that the helm chart version matches the Operator version; otherwise the Operator pods will fail to start due to a missing /manager entrypoint.
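A matching-version upgrade might look like the following sketch, assuming the chart is installed from the humio-operator helm repository (release and namespace names are illustrative):

```shell
# Pin the chart version to the Operator version being deployed so
# the chart and image stay in sync (0.0.13 shown as an example).
helm upgrade humio-operator humio-operator/humio-operator \
  --namespace humio-operator \
  --version 0.0.13
```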
Operator Version 0.0.12
The selector labels changed in version 0.0.12, so it is necessary to delete the humio-operator deployment prior to upgrading the helm chart. The upgrade steps are:
1. Delete the humio-operator deployment by running:
kubectl delete deployment humio-operator -n humio-operator
2. Run the helm upgrade command as documented above
If the humio-operator deployment is not removed before the upgrade, the upgrade will fail with:
Error: UPGRADE FAILED: cannot patch "humio-operator" with kind Deployment: Deployment.apps "humio-operator" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"humio-operator", "app.kubernetes.io/instance":"humio-operator", "app.kubernetes.io/name":"humio-operator"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Pre-0.0.12
No special changes are necessary when upgrading the LogScale Operator between versions 0.0.0 and 0.0.11.