
Humio Operator Resource Management
The Humio Operator manages a number of LogScale components such as Repositories, Parsers and Ingest Tokens.
After installing the Operator by following the Operator Installation Guide, a LogScale Cluster resource (referred to as HumioCluster) can be created, along with a number of other resource types.
Creating the Resource
Any of these resources can be created by applying YAML via kubectl. First, create a resource.yaml file with the desired content, and then run:
$ kubectl create -f ./resource.yaml
The content of the YAML file may contain any number of resources. The full list of resource types, with examples, is below.
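Once applied, you can confirm that the Operator has accepted a resource by querying its kind with kubectl; for example, for a HumioCluster (resource and namespace names are illustrative):
$ kubectl get humiocluster example-humiocluster -n logging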
HumioCluster
A HumioCluster resource tells the Humio Operator to create a LogScale Cluster. Multiple HumioCluster resources may be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale Cluster should be configured to run. The next parts of this document explain some common cluster configurations.
Ephemeral with S3 Storage
A highly recommended LogScale Cluster configuration is to run in ephemeral mode, using S3 for persistent storage. The LogScale pods are configured with hostPath, which mounts a directory from the host machine into the pod as local storage, ideally backed by NVMe SSDs. This configuration also sets fairly high resource limits and affinity policies that ensure no two LogScale pods are scheduled on the same host. This is an ideal storage configuration for production workloads running in AWS.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
image: "humio/humio-core:1.36.0"
targetReplicationFactor: 2
storagePartitionsCount: 24
digestPartitionsCount: 24
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: humio_node_type
operator: In
values:
- core
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- humio
topologyKey: kubernetes.io/hostname
dataVolumeSource:
hostPath:
path: "/mnt/disks/vol1"
type: "Directory"
environmentVariables:
- name: S3_STORAGE_BUCKET
value: "my-cluster-storage"
- name: S3_STORAGE_REGION
value: "us-west-2"
- name: S3_STORAGE_ENCRYPTION_KEY
value: "my-encryption-key"
- name: USING_EPHEMERAL_DISKS
value: "true"
- name: S3_STORAGE_PREFERRED_COPY_SOURCE
value: "true"
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"Ephemeral with GCS Storage
A highly recommended LogScale Cluster configuration is to run in ephemeral mode, using GCS for persistent storage. The LogScale pods are configured with hostPath, which mounts a directory from the host machine into the pod as local storage, ideally backed by NVMe SSDs. This configuration also sets fairly high resource limits and affinity policies that ensure no two LogScale pods are scheduled on the same host. This is an ideal storage configuration for production workloads running in GCP.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
targetReplicationFactor: 2
storagePartitionsCount: 24
digestPartitionsCount: 24
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: humio_node_type
operator: In
values:
- core
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- humio-core
topologyKey: kubernetes.io/hostname
dataVolumeSource:
hostPath:
path: "/mnt/disks/vol1"
type: "Directory"
extraHumioVolumeMounts:
- name: gcp-storage-account-json-file
mountPath: /var/lib/humio/gcp-storage-account-json-file
subPath: gcp-storage-account-json-file
readOnly: true
extraVolumes:
- name: gcp-storage-account-json-file
secret:
secretName: gcp-storage-account-json-file
environmentVariables:
- name: GCP_STORAGE_ACCOUNT_JSON_FILE
value: "/var/lib/humio/gcp-storage-account-json-file"
- name: GCP_STORAGE_BUCKET
value: "my-cluster-storage"
- name: GCP_STORAGE_ENCRYPTION_KEY
value: "my-encryption-key"
- name: USING_EPHEMERAL_DISKS
value: "true"
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"Nginx Ingress with Cert Manager
Configuring Ingress with Cert Manager will ensure you have an Ingress resource that can be used to access the cluster, along with a valid cert provided by Cert Manager.
Note
Ingress is currently not supported for Node Pools.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
image: "humio/humio-core:1.36.0"
environmentVariables:
- name: "ZOOKEEPER_URL"
value: "humio-cp-zookeeper-0.humio-cp-zookeeper-headless:2181"
- name: "KAFKA_SERVERS"
value: "humio-cp-kafka-0.humio-cp-kafka-headless:9092"
hostname: "humio.example.com"
esHostname: "humio-es.example.com"
ingress:
enabled: true
controller: nginx
annotations:
use-http01-solver: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: nginx
Nginx Ingress with Custom Path
In the case where you want to run LogScale under a custom path, set spec.path as shown in the following example.
Note
Ingress is currently not supported for Node Pools.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
image: "humio/humio-core:1.36.0"
environmentVariables:
- name: "ZOOKEEPER_URL"
value: "humio-cp-zookeeper-0.humio-cp-zookeeper-headless:2181"
- name: "KAFKA_SERVERS"
value: "humio-cp-kafka-0.humio-cp-kafka-headless:9092"
hostname: "humio.example.com"
esHostname: "humio-es.example.com"
path: /logs
ingress:
enabled: true
    controller: nginx
HumioCluster with Hostname References
In the case where spec.hostname
and/or spec.esHostname cannot be
managed in the HumioCluster resource, it's possible to use a reference
to an external source for either. Currently
secretKeyRef is supported.
To use secretKeyRef containing the
hostname, create the secret as
shown below, where
<hostname> is the hostname
for the HumioCluster:
$ kubectl create secret generic <cluster-name>-hostname --from-literal=data=<hostname> -n <namespace>
You would then update the HumioCluster resource to use the hostname reference:
spec:
hostnameSource:
secretKeyRef:
name: <cluster-name>-hostname
key: data
To use secretKeyRef containing the esHostname, create the secret as shown below, where <es-hostname> is the ES hostname for the HumioCluster:
$ kubectl create secret generic <cluster-name>-es-hostname --from-literal=data=<es-hostname> -n <namespace>
You would then update the HumioCluster resource to use the esHostname reference:
spec:
esHostnameSource:
secretKeyRef:
name: <cluster-name>-es-hostname
      key: data
Persistent Volumes
It's possible to use Persistent Volumes as the backing store for LogScale data. This can be used as an alternative to Bucket Storage; however, Persistent Volumes backed by network block storage are significantly slower than local disks, and LogScale will not perform well on that medium. For Persistent Volumes using local storage, see Local Persistent Volumes (Beta).
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
image: "humio/humio-core:1.36.0"
license:
secretKeyRef:
# Secret must be created with the following command: kubectl create secret generic example-humiocluster-license --from-literal=data=<license>
name: example-humiocluster-license
key: data
targetReplicationFactor: 2
storagePartitionsCount: 24
digestPartitionsCount: 24
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: humio_node_type
operator: In
values:
- core
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- humio-core
topologyKey: kubernetes.io/hostname
dataVolumePersistentVolumeClaimSpecTemplate:
storageClassName: standard
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 500Gi
environmentVariables:
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"Local Persistent Volumes (Beta)
It's possible to use local Persistent Volumes as the backing store for
LogScale data, using something like
Local
Persistent Volumes. This can be used in combination with Bucket
Storage, as the operator can be configured to clean up local Persistent
Volumes that are attached to a node when that node is removed from the
Kubernetes cluster. For this reason, it is extremely important to use
USING_EPHEMERAL_DISKS=true along with Bucket Storage when
using this option. This cleanup setting is enabled in the following
example by setting
dataVolumePersistentVolumeClaimPolicy.reclaimType=OnNodeDelete.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
image: "humio/humio-core:1.36.0"
license:
secretKeyRef:
# Secret must be created with the following command: kubectl create secret generic example-humiocluster-license --from-literal=data=<license>
name: example-humiocluster-license
key: data
targetReplicationFactor: 2
storagePartitionsCount: 24
digestPartitionsCount: 24
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: humio_node_type
operator: In
values:
- core
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- humio-core
topologyKey: kubernetes.io/hostname
dataVolumePersistentVolumeClaimSpecTemplate:
storageClassName: local-storage
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 500Gi
dataVolumePersistentVolumeClaimPolicy:
reclaimType: OnNodeDelete
environmentVariables:
- name: S3_STORAGE_BUCKET
value: "my-cluster-storage"
- name: S3_STORAGE_REGION
value: "us-west-2"
- name: S3_STORAGE_ENCRYPTION_KEY
value: "my-encryption-key"
- name: USING_EPHEMERAL_DISKS
value: "true"
- name: S3_STORAGE_PREFERRED_COPY_SOURCE
value: "true"
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"Node Pools (Beta)
Multiple groups of LogScale nodes may be run as part of the HumioCluster. An example of this may include a node pool for ingest-only nodes and a node pool for digest and storage nodes.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
targetReplicationFactor: 2
storagePartitionsCount: 24
digestPartitionsCount: 24
license:
secretKeyRef:
name: example-humiocluster-license
key: data
nodePools:
- name: digest-storage
spec:
image: "humio/humio-core:1.36.0"
nodeCount: 3
dataVolumeSource:
hostPath:
path: "/mnt/disks/vol1"
type: "Directory"
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
environmentVariables:
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
- name: ingest-only
spec:
image: "humio/humio-core:1.36.0"
nodeCount: 3
dataVolumePersistentVolumeClaimSpecTemplate:
storageClassName: standard
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 10Gi
resources:
limits:
cpu: "8"
memory: 56Gi
requests:
cpu: "6"
memory: 52Gi
environmentVariables:
- name: NODE_ROLES
value: "httponly"
- name: "ZOOKEEPER_URL"
value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
- name: "KAFKA_SERVERS"
value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"Configuring TLS with Operator
By default, TLS is enabled on each LogScale pod, and this is the recommended configuration. In some cases, however, you may want TLS disabled; to do this, use the configuration below.
If TLS is enabled here, it is assumed that TLS is also used for the connection to Kafka. If TLS on the LogScale pods is disabled but the connection to Kafka should use SSL, then Kafka will need to be configured explicitly to use SSL.
spec:
tls:
    enabled: false
Additional Kafka Configuration with Operator
Extra Kafka configs can be set and used by the LogScale pods. This is mainly used to toggle TLS when communicating with Kafka. To enable TLS, for example, set the configuration below.
SSL is enabled by default when using TLS for the LogScale pods. See Configuring TLS with Operator.
spec:
  extraKafkaConfigs: "security.protocol=SSL"
ZooKeeper Deployment in Kubernetes and Operator
When TLS is enabled for LogScale, TLS is by default also enabled for connections to ZooKeeper. In some cases, such as with MSK, TLS will be enabled for the Kafka brokers but not for ZooKeeper. To disable TLS for ZooKeeper, include -Dzookeeper.client.secure=false in the value of the HUMIO_OPTS environment variable.
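With the Operator, this can be expressed as an environment variable on the HumioCluster spec. This is a minimal sketch; if you already set HUMIO_OPTS, merge this flag with the existing value:
spec:
  environmentVariables:
    - name: HUMIO_OPTS
      value: "-Dzookeeper.client.secure=false"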
Authentication - SAML with Operator
When using SAML, it's necessary to follow the Configuration and Authentication with SAML documentation. Once the IDP certificate is obtained, you must create a secret containing that certificate using kubectl.
$ kubectl create secret generic <cluster-name>-idp-certificate --from-file=idp-certificate.pem=./my-idp-certificate.pem -n <namespace>
Once the secret has been created, a configuration similar to the one below can be added to enable SAML, adjusting for your cluster URL and IDP token.
spec:
environmentVariables:
- name: AUTHENTICATION_METHOD
value: saml
- name: AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN
value: "true"
- name: PUBLIC_URL
value: https://my-cluster.example.com
- name: SAML_IDP_SIGN_ON_URL
value: https://accounts.google.com/o/saml2/idp?idpid=idptoken
- name: SAML_IDP_ENTITY_ID
      value: https://accounts.google.com/o/saml2/idp?idpid=idptoken
Authentication - Single User
If running LogScale in single-user mode, you will need to set a password for the user named user. This can be done via a plain-text environment variable, or via a Kubernetes secret that is referenced by an environment variable. If supplying a secret, you must populate this secret prior to creating the HumioCluster resource, otherwise the pods will fail to start.
To set a password using a plain-text environment variable value:
spec:
environmentVariables:
- name: "SINGLE_USER_PASSWORD"
value: "MyVeryS3cretPassword"By setting a password using an environment variable secret reference:
spec:
environmentVariables:
- name: "SINGLE_USER_PASSWORD"
valueFrom:
secretKeyRef:
name: developer-user-password
          key: password
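The referenced secret must be populated beforehand; it can be created with a command like the following, where the secret name and key match the secretKeyRef above:
$ kubectl create secret generic developer-user-password --from-literal=password=<password> -n <namespace>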
License Management with Operator
LogScale licenses can be managed with the Operator. In order to do so, a Kubernetes secret must be created which contains the value of the license. First create the secret as shown below, where <license> is the license content obtained from LogScale:
$ kubectl create secret generic <cluster-name>-license --from-literal=data=<license> -n <namespace>
And then update the HumioCluster resource to use the secret reference:
spec:
license:
secretKeyRef:
name: <cluster-name>-license
      key: data
Update Strategy
HumioCluster resources may be configured with an update strategy. The updateStrategy.type field controls how the Operator restarts LogScale pods in response to an image change in the HumioCluster spec or nodePools spec. The available values for type are OnDelete, RollingUpdate, ReplaceAllOnUpdate, and RollingUpdateBestEffort:
ReplaceAllOnUpdate: All LogScale pods will be replaced at the same time during an update. Pods will still be replaced one at a time when there are other configuration changes, such as updates to pod environment variables. This is the default behavior.
OnDelete: No LogScale pods will be terminated, but new pods will be created with the new spec. Replacing existing pods will require each pod to be deleted by the user.
RollingUpdate: Pods will always be replaced one pod at a time. There may be some LogScale updates where rolling updates are not supported, so it is not recommended to have this set all the time.
RollingUpdateBestEffort: The Operator will evaluate the LogScale version change and determine if the LogScale pods can be updated in a rolling fashion or if they must be replaced at the same time.
spec:
updateStrategy:
    type: ReplaceAllOnUpdate
Custom Service Accounts
ServiceAccount resources may be created prior to creating the HumioCluster resource, and the HumioCluster may then be configured to use them rather than relying on the Humio Operator to create and manage the ServiceAccounts and bindings. These can be configured via the initServiceAccountName, authServiceAccountName, and humioServiceAccountName fields in the HumioCluster resource. They may be configured to use a shared ServiceAccount or separate ServiceAccounts. It is recommended to keep these separate unless otherwise required.
Separate Service Accounts
In the following example, we configure all three to use different ServiceAccount resources. To do this, create the ServiceAccount, ClusterRole, and ClusterRoleBinding for the initServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
name: humio-init
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: humio-init
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: humio-init
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: humio-init
subjects:
- kind: ServiceAccount
name: humio-init
namespace: default
Followed by the ServiceAccount, Role, and RoleBinding for the authServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
name: humio-auth
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: humio-auth
namespace: default
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- create
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: humio-auth
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: humio-auth
subjects:
- kind: ServiceAccount
name: humio-auth
namespace: default
And finally, create the ServiceAccount for the main LogScale container.
Note
Ensure the appropriate annotations are configured if using IRSA; see the sketch after this example.
apiVersion: v1
kind: ServiceAccount
metadata:
name: humio
namespace: default
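For example, when using IRSA, the ServiceAccount would carry a role annotation similar to the following (the role ARN shown is illustrative):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humio
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-humio-role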
Now include the following in the
HumioCluster resource so it will
use the ServiceAccounts:
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
name: example-humiocluster
spec:
humioServiceAccountName: humio
initServiceAccountName: humio-init
  authServiceAccountName: humio-auth
HumioExternalCluster
A HumioExternalCluster resource can be created for a LogScale cluster that runs outside of your Kubernetes cluster and was not created by the Humio Operator. Once this resource has been created, you can reference it from various other CRDs. For example, you could create a HumioAction and reference the HumioExternalCluster from the configuration of the HumioAction.
apiVersion: core.humio.com/v1alpha1
kind: HumioExternalCluster
metadata:
name: example-humioexternalcluster
labels:
app: 'humioexternalcluster'
app.kubernetes.io/name: 'humioexternalcluster'
app.kubernetes.io/instance: 'example-humioexternalcluster'
app.kubernetes.io/managed-by: 'manual'
spec:
url: "https://example-humiocluster.default:8080/"
apiTokenSecretName: "example-humiocluster-admin-token"
  caSecretName: "example-humiocluster"
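The secret referenced by apiTokenSecretName must contain an API token for the external cluster. Assuming the Operator's convention of storing the token under the key token (verify this for your Operator version), it could be created like this:
$ kubectl create secret generic example-humiocluster-admin-token --from-literal=token=<api-token> -n <namespace>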
HumioRepository
A HumioRepository resource tells the Humio Operator to create a LogScale Repository. Any number of HumioRepository resources may be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
Repository should be configured. The following shows an example
HumioRepository resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioRepository
metadata:
name: example-humiorepository
namespace: logging
spec:
managedClusterName: example-humiocluster
name: example-humiorepository
description: "Example LogScale Repository"
retention:
timeInDays: 30
ingestSizeInGB: 50
    storageSizeInGB: 10
HumioParser
A HumioParser resource tells the
Humio Operator to create a LogScale Parser. Any number of
HumioParser resources may be created
and managed by the Operator.
The content of the yaml file will depend on how the LogScale Parser should be configured. The following shows an example of a HumioParser resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioParser
metadata:
name: example-humioparser
namespace: logging
spec:
managedClusterName: example-humiocluster
name: example-humioparser
repositoryName: example-humiorepository
parserScript: |
case {
kubernetes.pod_name=/fluentbit/
| /\[(?<@timestamp>[^\]]+)\]/
| /^(?<@timestamp>.*)\[warn\].*/
| parseTimestamp(format="yyyy/MM/dd' 'HH:mm:ss", field=@timestamp);
parseJson();
* | kvParse()
}
HumioIngestToken
A HumioIngestToken resource tells
the Humio Operator to create a LogScale Ingest Token. Any number
of HumioIngestToken resources may be
created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
Ingest Token should be configured. The following shows an example
HumioIngestToken resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioIngestToken
metadata:
name: example-humioingesttoken
namespace: logging
spec:
managedClusterName: example-humiocluster
name: example-humioingesttoken
repositoryName: example-humiorepository
parserName: example-humioparser
tokenSecretName: example-humioingesttoken-token
By specifying tokenSecretName, the
Humio Operator will export the token as this secret name in Kubernetes. If
you do not wish to have the token exported, omit this field from the spec.
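The exported secret can then be consumed by whatever ships logs to the cluster. A minimal sketch, assuming the Operator stores the token under the key token in the exported secret (verify the key name for your Operator version):
env:
  - name: HUMIO_INGEST_TOKEN
    valueFrom:
      secretKeyRef:
        name: example-humioingesttoken-token
        key: token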
HumioAlert
A HumioAlert resource tells the
Humio Operator to create a LogScale Alert. Any number of
HumioAlert resources may be created
and managed by the Operator.
The content of the yaml file will depend on how the LogScale Alert
should be configured. The following shows an example of a
HumioAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioAlert
metadata:
name: example-alert
spec:
managedClusterName: example-humiocluster
name: example-alert
viewName: humio
query:
queryString: "#repo = humio | error = true | count() | _count > 0"
start: 24h
end: now
isLive: true
throttleTimeMillis: 60000
silenced: false
description: Error counts
actions:
    - example-email-action
HumioFilterAlert
A HumioFilterAlert resource tells
the Humio Operator to create a LogScale FilterAlert. Any number of
HumioFilterAlert resources may be
created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
FilterAlert should be configured. The following shows an example of a
HumioFilterAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioFilterAlert
metadata:
name: example-filter-alert
spec:
managedClusterName: example-humiocluster
name: example-filter-alert
viewName: humio
queryString: "#repo = humio | error = true"
throttleTimeSeconds: 3600
throttleField: some-field
enabled: true
description: Error counts
actions:
    - example-email-action
HumioAggregateAlert
A HumioAggregateAlert resource tells
the Humio Operator to create a LogScale AggregateAlert. Any number
of HumioAggregateAlert resources may
be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
AggregateAlert should be configured. The following shows an example of a
HumioAggregateAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioAggregateAlert
metadata:
name: example-aggregate-alert
spec:
managedClusterName: example-humiocluster
name: example-aggregate-alert
viewName: humio
queryString: "#repo = humio | error = true | count()"
queryTimestampType: "EventTimestamp"
throttleTimeSeconds: 300
triggerMode: "CompleteMode"
searchIntervalSeconds: 3600
throttleField: "@timestamp"
description: "This is an example of an aggregate alert"
enabled: true
actions:
    - example-email-action
HumioScheduledSearch
A HumioScheduledSearch resource
tells the Humio Operator to create a LogScale ScheduledSearch. Any
number of HumioScheduledSearch
resources may be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
ScheduledSearch should be configured. The following shows an example of a
HumioScheduledSearch resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioScheduledSearch
metadata:
name: example-scheduled-search
spec:
managedClusterName: example-humiocluster
name: example-scheduled-search
viewName: humio
queryString: "#repo = humio | error = true | count()"
queryStart: "1d"
queryEnd: "now"
schedule: "0 0 * * *"
timeZone: "UTC"
backfillLimit: 3
enabled: true
description: Error counts
actions:
    - example-email-action
HumioAction
A HumioAction resource tells the
Humio Operator to create a LogScale Action. Any number of
HumioAction resources may be created
and managed by the Operator.
The content of the yaml file will depend on how the LogScale Action should be configured. The following shows examples of different types of HumioAction resources.
HumioView
A HumioView resource tells the Humio Operator to create a LogScale View. The following shows an example of a HumioView resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioView
metadata:
name: example-humioview-managed
spec:
managedClusterName: example-humiocluster
name: "example-view"
connections:
- repositoryName: "example-repository"
filter: "*"Email Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: example-email-action
spec:
managedClusterName: example-humiocluster
name: example-email-action
viewName: humio
emailProperties:
recipients:
- example@example.com
subjectTemplate: "{alert_name} has alerted"
bodyTemplate: |-
{alert_name} has alerted
      click {url} to see the alert
HumioRepository Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-humio-repository-action
spec:
managedClusterName: example-humiocluster
name: example-humio-repository-action
viewName: humio
humioRepositoryProperties:
    ingestToken: some-humio-ingest-token
OpsGenie Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: example-humioaction
spec:
managedClusterName: example-humiocluster
name: example-ops-genie-action
viewName: humio
opsGenieProperties:
    genieKey: "some-genie-key"
PagerDuty Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-pagerduty-action
spec:
managedClusterName: example-humiocluster
name: example-pagerduty-action
viewName: humio
pagerDutyProperties:
routingKey: some-routing-key
    severity: critical
Slack Post Message Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-slack-post-message-action
spec:
managedClusterName: example-humiocluster
name: example-slack-post-message-action
viewName: humio
slackPostMessageProperties:
apiToken: some-oauth-token
channels:
- "#some-channel"
- "#some-other-channel"
fields:
query: "{query}"
      time-interval: "{query_time_interval}"
Slack Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-slack-action
spec:
managedClusterName: example-humiocluster
name: example-slack-action
viewName: humio
slackProperties:
url: "https://hooks.slack.com/services/T00000000/B00000000/YYYYYYYYYYYYYYYYYYYYYYYY"
fields:
query: "{query}"
      time-interval: "{query_time_interval}"
VictorOps Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-victor-ops-action
spec:
managedClusterName: example-humiocluster
name: example-victor-ops-action
viewName: humio
victorOpsProperties:
messageType: critical
    notifyUrl: "https://alert.victorops.com/integrations/0000/alert/0000/routing_key"
Webhook Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
name: humio-web-hook-action-managed
spec:
managedClusterName: example-humiocluster
name: example-web-hook-action
viewName: humio
webhookProperties:
url: "https://example.com/some/api"
headers:
some: header
some-other: header
method: POST
bodyTemplate: |-
{alert_name} has alerted
      click {url} to see the alert
HumioIPFilter
A HumioIPFilter resource tells the
Humio Operator to create a LogScale IPFilter. Any number of
HumioIPFilter resources may be
created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
IPFilter should be configured. The following shows an example of a
HumioIPFilter resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioIPFilter
metadata:
name: humioipfilter-sample
spec:
managedClusterName: example-humiocluster
name: example-ipfilter
ipFilter:
- action: deny
address: 192.168.1.24
- action: allow
      address: all
HumioViewToken
A HumioViewToken resource tells the
Humio Operator to create a LogScale ViewPermissionToken. Any
number of HumioViewToken resources
may be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
ViewPermissionToken should be configured. The following shows an example
of a HumioViewToken resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioViewToken
metadata:
name: humioviewtoken-sample
spec:
managedClusterName: humiocluster
name: example-view-token
viewNames:
- view-1
- view-2
permissions:
- ReadAccess
  tokenSecretName: view-secrettoken
HumioSystemToken
A HumioSystemToken resource tells
the Humio Operator to create a LogScale SystemPermissionToken. Any
number of HumioSystemToken resources
may be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
SystemPermissionToken should be configured. The following shows an example
of a HumioSystemToken resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioSystemToken
metadata:
name: humiosystemtoken-sample
spec:
managedClusterName: humiocluster
name: example-system-token
permissions:
- ReadHealthCheck
- ViewOrganizations
- ChangeUsername
  tokenSecretName: system-secrettoken
HumioOrganizationToken
A HumioOrganizationToken resource
tells the Humio Operator to create a LogScale
OrganizationPermissionToken. Any number of
HumioOrganizationToken resources may
be created and managed by the Operator.
The content of the yaml file will depend on how the LogScale
OrganizationPermissionToken should be configured. The following shows an
example of a HumioOrganizationToken
resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioOrganizationToken
metadata:
  name: humioorganizationtoken-sample
spec:
managedClusterName: humiocluster
name: example-organization-token
permissions:
- CreateRepository
- ManageUsers
  tokenSecretName: organization-secrettoken
HumioPdfRenderService
A HumioPdfRenderService resource tells the Humio Operator to deploy the PDF Render Service, which renders scheduled PDF reports for LogScale. The following shows an example of a HumioPdfRenderService resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioPdfRenderService
metadata:
name: pdf-render-service
namespace: logging
spec:
# TLS configuration shared with the HumioCluster CA secret managed by cert-manager.
# The example HumioCluster named "example-humiocluster" produces the example-humiocluster-ca-keypair secret.
tls:
enabled: true
caSecretName: example-humiocluster-ca-keypair
image: humio/pdf-render-service:0.1.3--build-108--sha-4b505137e430cf9dfe02341d51f0f298af3c89f6
replicas: 2
port: 5123
serviceType: ClusterIP
environmentVariables:
- name: XDG_CONFIG_HOME
value: /tmp/.chromium-config
- name: XDG_CACHE_HOME
value: /tmp/.chromium-cache
- name: LOG_LEVEL
value: "debug"
- name: CLEANUP_INTERVAL
value: "600"
# TLS-related env vars are injected automatically when spec.tls.enabled=true.
resources:
limits:
cpu: "1"
memory: "2Gi"
requests:
cpu: "1"
memory: "1Gi"
# Readiness probe configuration
readinessProbe:
httpGet:
path: /ready
port: 5123
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 1
successThreshold: 1
# Liveness probe configuration
livenessProbe:
httpGet:
path: /health
port: 5123
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 5
successThreshold: 1
# Node affinity configuration
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: humio_node_type
operator: In
values:
- core
# Add annotations for service
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "5123"
# Volume mounts for the container
volumeMounts:
- name: app-temp
mountPath: /app/temp
- name: tmp
mountPath: /tmp
# Volumes for the pod
volumes:
- name: app-temp
emptyDir:
medium: Memory
- name: tmp
emptyDir:
medium: Memory
# Container security context
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
# Pod security context (empty in the example)
podSecurityContext: {}
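For LogScale to use the service, the HumioCluster typically needs to point at it. A minimal sketch, assuming the ENABLE_SCHEDULED_REPORT and DEFAULT_PDF_RENDER_SERVICE_URL configuration variables apply to your LogScale version; the URL shown is illustrative and should match the Kubernetes Service created for the HumioPdfRenderService above:
spec:
  environmentVariables:
    - name: ENABLE_SCHEDULED_REPORT
      value: "true"
    - name: DEFAULT_PDF_RENDER_SERVICE_URL
      value: "https://pdf-render-service.logging:5123"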