Install LogScale Operator on Kubernetes
The LogScale Operator provides a way to deploy and manage one or more LogScale Clusters, as well as configure LogScale components such as Repositories, Parsers and Ingest Tokens.
Running LogScale with the LogScale Operator reduces the overall cost of running LogScale on Kubernetes by running a LogScale-maintained controller that manages many of the cluster operations for you.
Operator Features
Automates the installation of a LogScale Cluster on Kubernetes
Automates the management of LogScale Repositories, Parsers, and Ingest Tokens
Automates the management of LogScale, such as partition balancing
Automates version upgrades of LogScale
Automates configuration changes of LogScale
Allows the use of various storage media, including hostPath or storage class PVCs
Automates cluster authentication and security, such as pod-to-pod TLS, SAML and OAuth
Note
If you are looking for information about shipping data from a Kubernetes cluster to LogScale without running LogScale in Kubernetes, please see our Kubernetes Log Format documentation.
Installing LogScale using the LogScale Operator
The easiest way to install LogScale in Kubernetes is to use the official LogScale Operator. Once the LogScale Operator is running, a number of LogScale components can be created, including Humio Clusters, Repositories, Parsers, and Ingest Tokens; see LogScale Operator Resource Management for more information.
Pre-Requisites
Operator Deployment Platform
A Kubernetes cluster that is version 1.19+
Kubernetes node(s) that meet the Instance Sizing requirements
Cluster storage using either hostPath (recommended) or PVCs
Services
A running Kafka cluster with network access from Kubernetes nodes to both ZooKeeper and Kafka brokers
cert-manager v1.0+ (required by default, but can be disabled by setting certmanager to false; a quick check is shown below)
NGINX Ingress Controller v0.34.1 (only required if configuring HumioCluster CRs with ingress.controller set to nginx)
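If you rely on cert-manager for TLS, you can verify it is running before installing the operator. This is a minimal check and assumes cert-manager was installed into its default cert-manager namespace:
$ kubectl get pods --namespace cert-manager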
Installing the CRDs
Obtain the version from Releases. It is recommended that you always use the latest stable release.
$ export HUMIO_OPERATOR_VERSION=x.x.x
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioclusters.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioexternalclusters.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioingesttokens.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioparsers.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiorepositories.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioviews.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioalerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioaggregatealerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiofilteralerts.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioscheduledsearches.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humioactions.yaml
$ kubectl apply --server-side -f https://raw.githubusercontent.com/humio/humio-operator/humio-operator-${HUMIO_OPERATOR_VERSION}/config/crd/bases/core.humio.com_humiobootstraptokens.yaml
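After applying the manifests, you can verify that the CRDs were registered. All of them belong to the core.humio.com API group:
$ kubectl get crds | grep core.humio.com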
Note
It is possible to skip this step by omitting --skip-crds when installing the Helm chart, in which case the chart installs the CRDs itself. This is not recommended, because uninstalling the Helm chart will then also remove the custom resources.
Installing the Operator Helm Chart
To install the chart with the release name humio-operator:
$ helm repo add humio-operator https://humio.github.io/humio-operator
For Helm v3 or higher:
$ helm install humio-operator humio-operator/humio-operator \
--namespace logging \
--create-namespace \
--version="${HUMIO_OPERATOR_VERSION}" \
--skip-crds
Note
By default, we expect cert-manager to be installed in order to configure TLS. If you do not have cert-manager installed, or if you know you do not want TLS, see the Configuration section for how to disable this.
The command deploys humio-operator on the Kubernetes cluster in the default configuration. The Configuration section lists the parameters that can be configured during installation.
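To confirm the operator is running, list the pods in the namespace used for the installation (logging in the example above):
$ kubectl get pods --namespace logging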
Tip
List all releases using helm list
Creating a LogScale Cluster
A LogScale Cluster can be created once the LogScale Operator is running. Follow the instructions for creating a LogScale Cluster resource.
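For orientation only, a HumioCluster resource is a manifest in the core.humio.com API group applied to a namespace the operator watches. The sketch below shows the overall shape; the name is an arbitrary example, the apiVersion should be checked against the CRDs installed for your operator version, and the required spec fields are covered in the LogScale Cluster instructions.
apiVersion: core.humio.com/v1alpha1   # verify against the CRDs installed for your operator version
kind: HumioCluster
metadata:
  name: example-humiocluster
  namespace: logging
spec:
  # Required fields such as the LogScale image, Kafka connection,
  # storage configuration and license are described in the
  # LogScale Cluster documentation.
Once the spec is filled in, apply the manifest with kubectl apply -f humiocluster.yaml (filename is illustrative).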
LogScale Operator Permissions
By default, Kubernetes ServiceAccounts are created for each container in the LogScale pods. The LogScale Operator will need the appropriate permissions to create ClusterRole and ClusterRoleBinding resources, as well as Role and RoleBinding resources in the namespace in which the HumioCluster is created.
This can be bypassed by creating the ServiceAccounts prior to creating the HumioCluster resource and then configuring the HumioCluster to use them. See Custom Service Accounts. If this is done, both operator.rbac.allowManageRoles and operator.rbac.allowManageClusterRoles can be set to false.
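As a sketch, both settings can be disabled through the Helm chart parameters listed in the Configuration section below, for example when upgrading an existing humio-operator release:
$ helm upgrade humio-operator humio-operator/humio-operator \
  --namespace logging \
  --version="${HUMIO_OPERATOR_VERSION}" \
  --skip-crds \
  --set operator.rbac.allowManageRoles=false \
  --set operator.rbac.allowManageClusterRoles=false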
Uninstalling the Operator Helm Chart
To uninstall/delete the humio-operator deployment:
$ helm delete humio-operator --namespace logging
The command removes all the Kubernetes components associated with the chart and deletes the release.
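The CRDs applied manually in the Installing the CRDs step are not removed by helm delete. If you also want to remove them, note that this deletes any remaining HumioCluster and related custom resources; one way to do it is:
$ kubectl get crds -o name | grep core.humio.com | xargs kubectl delete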
Configuration
The following table lists the configurable parameters of the humio-operator chart and their default values.
Parameter | Description | Default |
---|---|---|
operator.image.pullSecrets | Image pull secrets to pull the operator container image. | [] |
operator.image.repository | Operator container image repository. | humio/humio-operator |
operator.image.tag | Operator container image tag. | <latest release tag> |
operator.rbac.allowManageRoles | Configure RBAC resources to allow the humio-operator to manage Role resources. Can be disabled if all RBAC resources are created outside the humio-operator. | true |
operator.rbac.allowManageClusterRoles | Configure RBAC resources to allow the humio-operator to manage ClusterRole resources. Can be disabled if the init container is disabled on all HumioCluster resources, or if all RBAC resources are created outside the humio-operator. | true |
operator.rbac.create | Automatically create operator RBAC resources. | true |
operator.resources | Operator resource requests and limits. | {requests: {cpu: 250m, memory: 200Mi}, limits: {cpu: 250m, memory: 200Mi}} |
operator.watchNamespaces | List of namespaces the operator will watch for resources (if empty, it watches all namespaces). NB: If this is non-empty, it requires the use of Custom Service Accounts. | [] |
openshift | Install additional RBAC resources specific to OpenShift. | false |
certmanager | Whether cert-manager is present on the cluster, which will be used for TLS functionality. | true |
These parameters can be passed via Helm's --set option. For example, to install with the cert-manager integration disabled:
$ helm install humio-operator humio-operator/humio-operator \
  --namespace logging \
  --create-namespace \
  --version="${HUMIO_OPERATOR_VERSION}" \
  --skip-crds \
  --set certmanager=false