Humio Operator Resource Management
The Humio Operator manages a number of LogScale components such as Repositories, Parsers and Ingest Tokens.
After installing the Operator by following the Operator Installation Guide, a LogScale Cluster resource (referred to as HumioCluster) can be created, along with a number of other resource types.
Creating the Resource
Any of the resources can be created by applying their YAML via kubectl. First, create a resource.yaml file with the desired content, and then run:
$ kubectl create -f ./resource.yaml
The YAML file may contain any number of resources. The full list of resource types, with examples, is below.
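For instance, a single file can hold several resources separated by `---`. A minimal, hypothetical sketch (the names mirror the examples later in this document):

```yaml
# Hypothetical resource.yaml combining two resources in one file.
apiVersion: core.humio.com/v1alpha1
kind: HumioRepository
metadata:
  name: example-humiorepository
spec:
  managedClusterName: example-humiocluster
  name: example-humiorepository
---
apiVersion: core.humio.com/v1alpha1
kind: HumioIngestToken
metadata:
  name: example-humioingesttoken
spec:
  managedClusterName: example-humiocluster
  name: example-humioingesttoken
  repositoryName: example-humiorepository
```

Applying this file with `kubectl create -f ./resource.yaml` creates both resources in one step.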
HumioCluster
A HumioCluster resource tells the Humio Operator to create a LogScale Cluster. Multiple HumioCluster resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Cluster should be configured to run. The following sections describe some common cluster configurations.
Ephemeral with S3 Storage
A highly recommended LogScale Cluster configuration is to run in ephemeral mode, using S3 for persistent storage. The LogScale pods are configured with a hostPath volume, which mounts a directory from the host machine into the pod; this local storage should ideally be backed by NVMe SSDs. The configuration also sets fairly high resource limits and affinity policies that ensure no two LogScale pods are scheduled on the same host. This is an ideal storage configuration for production workloads running in AWS.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  targetReplicationFactor: 2
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  resources:
    limits:
      cpu: "8"
      memory: 56Gi
    requests:
      cpu: "6"
      memory: 52Gi
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: humio_node_type
                operator: In
                values:
                  - core
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - humio
          topologyKey: kubernetes.io/hostname
  dataVolumeSource:
    hostPath:
      path: "/mnt/disks/vol1"
      type: "Directory"
  environmentVariables:
    - name: S3_STORAGE_BUCKET
      value: "my-cluster-storage"
    - name: S3_STORAGE_REGION
      value: "us-west-2"
    - name: S3_STORAGE_ENCRYPTION_KEY
      value: "my-encryption-key"
    - name: USING_EPHEMERAL_DISKS
      value: "true"
    - name: S3_STORAGE_PREFERRED_COPY_SOURCE
      value: "true"
    - name: "ZOOKEEPER_URL"
      value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
    - name: "KAFKA_SERVERS"
      value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
Ephemeral with GCS Storage
A highly recommended LogScale Cluster configuration is to run in ephemeral mode, using GCS for persistent storage. The LogScale pods are configured with a hostPath volume, which mounts a directory from the host machine into the pod; this local storage should ideally be backed by NVMe SSDs. The configuration also sets fairly high resource limits and affinity policies that ensure no two LogScale pods are scheduled on the same host. This is an ideal storage configuration for production workloads running in GCP.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  targetReplicationFactor: 2
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  resources:
    limits:
      cpu: "8"
      memory: 56Gi
    requests:
      cpu: "6"
      memory: 52Gi
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: humio_node_type
                operator: In
                values:
                  - core
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - humio-core
          topologyKey: kubernetes.io/hostname
  dataVolumeSource:
    hostPath:
      path: "/mnt/disks/vol1"
      type: "Directory"
  extraHumioVolumeMounts:
    - name: gcp-storage-account-json-file
      mountPath: /var/lib/humio/gcp-storage-account-json-file
      subPath: gcp-storage-account-json-file
      readOnly: true
  extraVolumes:
    - name: gcp-storage-account-json-file
      secret:
        secretName: gcp-storage-account-json-file
  environmentVariables:
    - name: GCP_STORAGE_ACCOUNT_JSON_FILE
      value: "/var/lib/humio/gcp-storage-account-json-file"
    - name: GCP_STORAGE_BUCKET
      value: "my-cluster-storage"
    - name: GCP_STORAGE_ENCRYPTION_KEY
      value: "my-encryption-key"
    - name: USING_EPHEMERAL_DISKS
      value: "true"
    - name: "ZOOKEEPER_URL"
      value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
    - name: "KAFKA_SERVERS"
      value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
Nginx Ingress with Cert Manager
Configuring Ingress with Cert Manager ensures you have an Ingress resource that can be used to access the cluster, along with a valid certificate provided by Cert Manager.
Note
Ingress is currently not supported for Node Pools.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  environmentVariables:
    - name: "ZOOKEEPER_URL"
      value: "humio-cp-zookeeper-0.humio-cp-zookeeper-headless:2181"
    - name: "KAFKA_SERVERS"
      value: "humio-cp-kafka-0.humio-cp-kafka-headless:9092"
  hostname: "humio.example.com"
  esHostname: "humio-es.example.com"
  ingress:
    enabled: true
    controller: nginx
    annotations:
      use-http01-solver: "true"
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: nginx
Nginx Ingress with Custom Path
In the case where you want to run LogScale under a custom path, use a configuration like the following.
Note
Ingress is currently not supported for Node Pools.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  environmentVariables:
    - name: "ZOOKEEPER_URL"
      value: "humio-cp-zookeeper-0.humio-cp-zookeeper-headless:2181"
    - name: "KAFKA_SERVERS"
      value: "humio-cp-kafka-0.humio-cp-kafka-headless:9092"
  hostname: "humio.example.com"
  esHostname: "humio-es.example.com"
  path: /logs
  ingress:
    enabled: true
    controller: nginx
HumioCluster with Hostname References
In the case where spec.hostname and/or spec.esHostname cannot be managed in the HumioCluster resource, it's possible to use a reference to an external source for either. Currently, secretKeyRef is supported.
To use a secretKeyRef containing the hostname, create the secret as shown below, where <hostname> is the hostname for the HumioCluster:
$ kubectl create secret generic <cluster-name>-hostname --from-literal=data=<hostname> -n <namespace>
You would then update the HumioCluster resource to use the hostname reference:
spec:
  hostnameSource:
    secretKeyRef:
      name: <cluster-name>-hostname
      key: data
To use a secretKeyRef containing the esHostname, create the secret as shown below, where <es-hostname> is the ES hostname for the HumioCluster:
$ kubectl create secret generic <cluster-name>-es-hostname --from-literal=data=<es-hostname> -n <namespace>
You would then update the HumioCluster resource to use the esHostname reference:
spec:
  esHostnameSource:
    secretKeyRef:
      name: <cluster-name>-es-hostname
      key: data
Persistent Volumes
It's possible to use Persistent Volumes as the backing store for LogScale data. This can be used as an alternative to Bucket Storage; however, Persistent Volumes backed by network block storage are significantly slower than local disks, and LogScale will not perform well on this medium. To use Persistent Volumes backed by local storage, see Local Persistent Volumes (Beta).
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  license:
    secretKeyRef:
      # Secret must be created with the following command: kubectl create secret generic example-humiocluster-license --from-literal=data=<license>
      name: example-humiocluster-license
      key: data
  targetReplicationFactor: 2
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  resources:
    limits:
      cpu: "8"
      memory: 56Gi
    requests:
      cpu: "6"
      memory: 52Gi
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: humio_node_type
                operator: In
                values:
                  - core
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - humio-core
          topologyKey: kubernetes.io/hostname
  dataVolumePersistentVolumeClaimSpecTemplate:
    storageClassName: standard
    accessModes: [ReadWriteOnce]
    resources:
      requests:
        storage: 500Gi
  environmentVariables:
    - name: "ZOOKEEPER_URL"
      value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
    - name: "KAFKA_SERVERS"
      value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
Local Persistent Volumes (Beta)
It's possible to use local Persistent Volumes as the backing store for LogScale data, using something like Local Persistent Volumes. This can be used in combination with Bucket Storage, as the operator can be configured to clean up local Persistent Volumes attached to a node when that node is removed from the Kubernetes cluster. For this reason, it is extremely important to set USING_EPHEMERAL_DISKS=true along with Bucket Storage when using this option. This cleanup behavior is enabled in the following example by setting dataVolumePersistentVolumeClaimPolicy.reclaimType=OnNodeDelete.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  image: "humio/humio-core:1.36.0"
  license:
    secretKeyRef:
      # Secret must be created with the following command: kubectl create secret generic example-humiocluster-license --from-literal=data=<license>
      name: example-humiocluster-license
      key: data
  targetReplicationFactor: 2
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  resources:
    limits:
      cpu: "8"
      memory: 56Gi
    requests:
      cpu: "6"
      memory: 52Gi
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: humio_node_type
                operator: In
                values:
                  - core
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
          - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                  - linux
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - humio-core
          topologyKey: kubernetes.io/hostname
  dataVolumePersistentVolumeClaimSpecTemplate:
    storageClassName: local-storage
    accessModes: [ReadWriteOnce]
    resources:
      requests:
        storage: 500Gi
  dataVolumePersistentVolumeClaimPolicy:
    reclaimType: OnNodeDelete
  environmentVariables:
    - name: S3_STORAGE_BUCKET
      value: "my-cluster-storage"
    - name: S3_STORAGE_REGION
      value: "us-west-2"
    - name: S3_STORAGE_ENCRYPTION_KEY
      value: "my-encryption-key"
    - name: USING_EPHEMERAL_DISKS
      value: "true"
    - name: S3_STORAGE_PREFERRED_COPY_SOURCE
      value: "true"
    - name: "ZOOKEEPER_URL"
      value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
    - name: "KAFKA_SERVERS"
      value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
Node Pools (Beta)
Multiple groups of LogScale nodes may be run as part of the HumioCluster. An example of this may include a node pool for ingest-only nodes and a node pool for digest and storage nodes.
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  targetReplicationFactor: 2
  storagePartitionsCount: 24
  digestPartitionsCount: 24
  license:
    secretKeyRef:
      name: example-humiocluster-license
      key: data
  nodePools:
    - name: digest-storage
      spec:
        image: "humio/humio-core:1.36.0"
        nodeCount: 3
        dataVolumeSource:
          hostPath:
            path: "/mnt/disks/vol1"
            type: "Directory"
        resources:
          limits:
            cpu: "8"
            memory: 56Gi
          requests:
            cpu: "6"
            memory: 52Gi
        environmentVariables:
          - name: "ZOOKEEPER_URL"
            value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
          - name: "KAFKA_SERVERS"
            value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
    - name: ingest-only
      spec:
        image: "humio/humio-core:1.36.0"
        nodeCount: 3
        dataVolumePersistentVolumeClaimSpecTemplate:
          storageClassName: standard
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 10Gi
        resources:
          limits:
            cpu: "8"
            memory: 56Gi
          requests:
            cpu: "6"
            memory: 52Gi
        environmentVariables:
          - name: NODE_ROLES
            value: "httponly"
          - name: "ZOOKEEPER_URL"
            value: "z-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181,z-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:2181"
          - name: "KAFKA_SERVERS"
            value: "b-2-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-1-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092,b-3-my-zookeeper.c4.kafka.us-west-2.amazonaws.com:9092"
Configuring TLS with Operator
By default, TLS is enabled on each LogScale pod. This is recommended; however, in some cases you may want TLS disabled. To do this, use the configuration below.
If TLS is enabled here, it is assumed that TLS is also used for the connection to Kafka. If TLS on the LogScale pods is disabled but the connection to Kafka should use SSL, then Kafka will need to be configured explicitly to use SSL.
spec:
  tls:
    enabled: false
Additional Kafka Configuration with Operator
Extra Kafka configs can be set and used by the LogScale pods. This is mainly used to toggle TLS when communicating with Kafka. For example, to enable TLS, set the configuration below.
SSL is enabled by default when using TLS for the LogScale pods. See Configuring TLS with Operator.
spec:
  extraKafkaConfigs: "security.protocol=SSL"
ZooKeeper Deployment in Kubernetes and Operator
Available: LogScale & ZooKeeper v1.108.0
The requirement for LogScale to use ZooKeeper was removed in LogScale 1.108.0. ZooKeeper may still be required by Kafka. Please refer to your chosen Kafka deployment documentation for details.
When TLS is enabled for LogScale, TLS is by default also enabled for connections to ZooKeeper. In some cases, such as with MSK, TLS will be enabled for the Kafka brokers but not for ZooKeeper. To disable TLS for ZooKeeper, set the following in the HUMIO_OPTS environment variable: -Dzookeeper.client.secure=false.
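As a minimal sketch, assuming HUMIO_OPTS is not already set elsewhere in your configuration, the flag can be supplied through environmentVariables on the HumioCluster spec:

```yaml
spec:
  environmentVariables:
    # Assumes no other JVM options are needed; if HUMIO_OPTS already
    # carries flags, append this one rather than replacing the value.
    - name: HUMIO_OPTS
      value: "-Dzookeeper.client.secure=false"
```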
Authentication - SAML with Operator
When using SAML, it's necessary to follow the SAML Authentication documentation; once the IDP certificate is obtained, create a secret containing that certificate using kubectl.
$ kubectl create secret generic <cluster-name>-idp-certificate --from-file=idp-certificate.pem=./my-idp-certificate.pem -n <namespace>
Once the secret has been created, a configuration similar to the one below can be added to enable SAML, adjusting for your cluster URL and IDP token.
spec:
  environmentVariables:
    - name: AUTHENTICATION_METHOD
      value: saml
    - name: AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN
      value: "true"
    - name: PUBLIC_URL
      value: https://my-cluster.example.com
    - name: SAML_IDP_SIGN_ON_URL
      value: https://accounts.google.com/o/saml2/idp?idpid=idptoken
    - name: SAML_IDP_ENTITY_ID
      value: https://accounts.google.com/o/saml2/idp?idpid=idptoken
Authentication - Single User
If running LogScale in single user mode, you will need to set a password for the user. This can be done via a plain-text environment variable or using a Kubernetes secret that is referenced by an environment variable. If supplying a secret, you must populate it prior to creating the HumioCluster resource, otherwise the pods will fail to start.
By setting a password using an environment variable plain text value:
spec:
  environmentVariables:
    - name: "SINGLE_USER_PASSWORD"
      value: "MyVeryS3cretPassword"
By setting a password using an environment variable secret reference:
spec:
  environmentVariables:
    - name: "SINGLE_USER_PASSWORD"
      valueFrom:
        secretKeyRef:
          name: developer-user-password
          key: password
License Management with Operator
LogScale licenses can be managed with the operator. To do so, a Kubernetes secret must be created containing the license value. First create the secret as shown below, where <license> is the license content obtained from LogScale:
$ kubectl create secret generic <cluster-name>-license --from-literal=data=<license> -n <namespace>
And then update the HumioCluster resource to use the secret reference:
spec:
  license:
    secretKeyRef:
      name: <cluster-name>-license
      key: data
Update Strategy
HumioCluster resources may be configured with an Update Strategy. The
updateStrategy.type
controls how the operator
restarts LogScale pods in response to a image change to the
HumioCluster
spec or nodePools
spec.
The available values for type
are:
OnDelete
, RollingUpdate
,
ReplaceAllOnUpdate
, and
RollingUpdateBestEffort
.
ReplaceAllOnUpdate
All LogScale pods will be replaced at the same time during an update. Pods will still be replaced one at a time when there are other configuration changes such as updates to pod environment variables. This is the default behavior.
OnDelete
No LogScale pods will be terminated but new pods will be created with the new spec. Replacing existing pods will require each pod to be deleted by the user.
RollingUpdate
Pods will always be replaced one pod at a time. There may be some LogScale updates where rolling updates are not supported, so it is not recommended to have this set all the time.
RollingUpdateBestEffort
The operator will evaluate the LogScale version change and determine if the LogScale pods can be updated in a rolling fashion or if they must be replaced at the same time.
spec:
  updateStrategy:
    type: ReplaceAllOnUpdate
Custom Service Accounts
ServiceAccount resources may be created prior to creating the HumioCluster resource, and the HumioCluster may then be configured to use them rather than relying on the Humio Operator to create and manage the ServiceAccounts and bindings. These can be configured via the initServiceAccountName, authServiceAccountName, and humioServiceAccountName fields in the HumioCluster resource. They may be configured to use a shared ServiceAccount or separate ServiceAccounts. It is recommended to keep these separate unless otherwise required.
Separate Service Accounts
In the following example, we configure all three to use different ServiceAccount resources. To do this, create the ServiceAccount, ClusterRole, and ClusterRoleBinding for the initServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humio-init
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: humio-init
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: humio-init
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: humio-init
subjects:
  - kind: ServiceAccount
    name: humio-init
    namespace: default
Followed by the ServiceAccount, Role, and RoleBinding for the authServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humio-auth
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: humio-auth
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: humio-auth
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: humio-auth
subjects:
  - kind: ServiceAccount
    name: humio-auth
    namespace: default
And finally the ServiceAccount for the main LogScale container.
Note
Ensure the appropriate annotations are configured if using IRSA.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humio
  namespace: default
Now include the following in the HumioCluster resource so it uses these ServiceAccounts:
apiVersion: core.humio.com/v1alpha1
kind: HumioCluster
metadata:
  name: example-humiocluster
spec:
  humioServiceAccountName: humio
  initServiceAccountName: humio-init
  authServiceAccountName: humio-auth
HumioRepository
A HumioRepository resource tells the Humio Operator to create a LogScale Repository. Any number of HumioRepository resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Repository should be configured. The following shows an example HumioRepository resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioRepository
metadata:
  name: example-humiorepository
  namespace: logging
spec:
  managedClusterName: example-humiocluster
  name: example-humiorepository
  description: "Example LogScale Repository"
  retention:
    timeInDays: 30
    ingestSizeInGB: 50
    storageSizeInGB: 10
HumioParser
A HumioParser resource tells the Humio Operator to create a LogScale Parser. Any number of HumioParser resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Parser should be configured. The following shows an example HumioParser resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioParser
metadata:
  name: example-humioparser
  namespace: logging
spec:
  managedClusterName: example-humiocluster
  name: example-humioparser
  repositoryName: example-humiorepository
  parserScript: |
    case {
      kubernetes.pod_name=/fluentbit/
        | /\[(?<@timestamp>[^\]]+)\]/
        | /^(?<@timestamp>.*)\[warn\].*/
        | parseTimestamp(format="yyyy/MM/dd' 'HH:mm:ss", field=@timestamp);
      parseJson();
      * | kvParse()
    }
HumioIngestToken
A HumioIngestToken resource tells the Humio Operator to create a LogScale Ingest Token. Any number of HumioIngestToken resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Ingest Token should be configured. The following shows an example HumioIngestToken resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioIngestToken
metadata:
  name: example-humioingesttoken
  namespace: logging
spec:
  managedClusterName: example-humiocluster
  name: example-humioingesttoken
  repositoryName: example-humiorepository
  parserName: example-humioparser
  tokenSecretName: example-humioingesttoken-token
By specifying tokenSecretName, the Humio Operator will export the token under this secret name in Kubernetes. If you do not wish to have the token exported, omit this field from the spec.
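A workload that ships logs can then consume the exported secret directly. A hedged sketch of a pod env entry (the key name `token` is an assumption; inspect the generated secret to confirm the key the operator uses):

```yaml
# Hypothetical container env snippet consuming the exported ingest token.
env:
  - name: HUMIO_INGEST_TOKEN
    valueFrom:
      secretKeyRef:
        name: example-humioingesttoken-token
        key: token   # assumed key name; verify with kubectl describe secret
```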
HumioAlert
A HumioAlert resource tells the Humio Operator to create a LogScale Alert. Any number of HumioAlert resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Alert should be configured. The following shows an example HumioAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioAlert
metadata:
  name: example-alert
spec:
  managedClusterName: example-humiocluster
  name: example-alert
  viewName: humio
  query:
    queryString: "#repo = humio | error = true | count() | _count > 0"
    start: 24h
    end: now
    isLive: true
  throttleTimeMillis: 60000
  silenced: false
  description: Error counts
  actions:
    - example-email-action
HumioFilterAlert
A HumioFilterAlert resource tells the Humio Operator to create a LogScale FilterAlert. Any number of HumioFilterAlert resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale FilterAlert should be configured. The following shows an example HumioFilterAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioFilterAlert
metadata:
  name: example-filter-alert
spec:
  managedClusterName: example-humiocluster
  name: example-filter-alert
  viewName: humio
  queryString: "#repo = humio | error = true"
  throttleTimeSeconds: 3600
  throttleField: some-field
  enabled: true
  description: Error counts
  actions:
    - example-email-action
HumioAggregateAlert
A HumioAggregateAlert resource tells the Humio Operator to create a LogScale AggregateAlert. Any number of HumioAggregateAlert resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale AggregateAlert should be configured. The following shows an example HumioAggregateAlert resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioAggregateAlert
metadata:
  name: example-aggregate-alert
spec:
  managedClusterName: example-humiocluster
  name: example-aggregate-alert
  viewName: humio
  queryString: "#repo = humio | error = true | count()"
  queryTimestampType: "EventTimestamp"
  throttleTimeSeconds: 300
  triggerMode: "CompleteMode"
  searchIntervalSeconds: 3600
  throttleField: "@timestamp"
  description: "This is an example of an aggregate alert"
  enabled: true
  actions:
    - example-email-action
HumioScheduledSearch
A HumioScheduledSearch resource tells the Humio Operator to create a LogScale ScheduledSearch. Any number of HumioScheduledSearch resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale ScheduledSearch should be configured. The following shows an example HumioScheduledSearch resource.
apiVersion: core.humio.com/v1alpha1
kind: HumioScheduledSearch
metadata:
  name: example-scheduled-search
spec:
  managedClusterName: example-humiocluster
  name: example-scheduled-search
  viewName: humio
  queryString: "#repo = humio | error = true | count()"
  queryStart: "1d"
  queryEnd: "now"
  schedule: "0 0 * * *"
  timeZone: "UTC"
  backfillLimit: 3
  enabled: true
  description: Error counts
  actions:
    - example-email-action
HumioAction
A HumioAction resource tells the Humio Operator to create a LogScale Action. Any number of HumioAction resources may be created and managed by the Operator.
The content of the YAML file will depend on how the LogScale Action should be configured. The following shows examples of different types of HumioAction resources.
Email Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: example-email-action
spec:
  managedClusterName: example-humiocluster
  name: example-email-action
  viewName: humio
  emailProperties:
    recipients:
      - example@example.com
    subjectTemplate: "{alert_name} has alerted"
    bodyTemplate: |-
      {alert_name} has alerted
      click {url} to see the alert
HumioRepository Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-humio-repository-action
spec:
  managedClusterName: example-humiocluster
  name: example-humio-repository-action
  viewName: humio
  humioRepositoryProperties:
    ingestToken: some-humio-ingest-token
OpsGenie Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: example-humioaction
spec:
  managedClusterName: example-humiocluster
  name: example-ops-genie-action
  viewName: humio
  opsGenieProperties:
    genieKey: "some-genie-key"
PagerDuty Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-pagerduty-action
spec:
  managedClusterName: example-humiocluster
  name: example-pagerduty-action
  viewName: humio
  pagerDutyProperties:
    routingKey: some-routing-key
    severity: critical
Slack Post Message Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-slack-post-message-action
spec:
  managedClusterName: example-humiocluster
  name: example-slack-post-message-action
  viewName: humio
  slackPostMessageProperties:
    apiToken: some-oauth-token
    channels:
      - "#some-channel"
      - "#some-other-channel"
    fields:
      query: "{query}"
      time-interval: "{query_time_interval}"
Slack Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-slack-action
spec:
  managedClusterName: example-humiocluster
  name: example-slack-action
  viewName: humio
  slackProperties:
    url: "https://hooks.slack.com/services/T00000000/B00000000/YYYYYYYYYYYYYYYYYYYYYYYY"
    fields:
      query: "{query}"
      time-interval: "{query_time_interval}"
VictorOps Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-victor-ops-action
spec:
  managedClusterName: example-humiocluster
  name: example-victor-ops-action
  viewName: humio
  victorOpsProperties:
    messageType: critical
    notifyUrl: "https://alert.victorops.com/integrations/0000/alert/0000/routing_key"
Webhook Action
apiVersion: core.humio.com/v1alpha1
kind: HumioAction
metadata:
  name: humio-web-hook-action-managed
spec:
  managedClusterName: example-humiocluster
  name: example-web-hook-action
  viewName: humio
  webhookProperties:
    url: "https://example.com/some/api"
    headers:
      some: header
      some-other: header
    method: POST
    bodyTemplate: |-
      {alert_name} has alerted
      click {url} to see the alert