Running Humio on Kubernetes
Installation of Humio using the Kubernetes Helm Charts is deprecated. Refer instead to the Installation using the Humio Operator.
If you are looking for information about shipping data from a Kubernetes cluster to Humio without running Humio in Kubernetes, please see our Kubernetes platform documentation.
Installation using Helm
Directions for installing Helm for your particular OS flavor can be found on the Helm GitHub page.
Once that is done, add the Humio Helm chart repository and update it. This repository contains the subcharts for Humio. The Confluent Helm Charts are included as dependencies and are pulled in automatically when running the installation below.
helm repo add humio https://humio.github.io/humio-helm-charts
helm repo update
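If you want to confirm that the repository was added, you can search it; with Helm 3, helm search repo lists the charts available from the repositories you have added:
helm search repo humio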
Now create a values.yaml file. Adjust the version, resources, and JVM memory as appropriate for the nodes on which the pods will be scheduled. The jvm.xmx and jvm.maxDirectMemorySize settings should each be half of the allocated memory.
humio-core:
  enabled: true
  # The number of Humio pods
  replicas: 3
  # Use a custom version of Humio.
  image: humio/humio-core:<version>
  # Custom partitions
  ingest:
    initialPartitionsPerNode: 4
  storage:
    initialPartitionsPerNode: 4
  # Custom CPU/Memory resources
  resources:
    limits:
      cpu: 30
      memory: 220Gi
    requests:
      cpu: 28
      memory: 220Gi
  # Custom JVM memory settings (these will depend on resources defined)
  jvm:
    xss: 2m
    xms: 4g
    xmx: 110g
    maxDirectMemorySize: 110g
    extraArgs: -XX:+UseParallelGC -XX:+UnlockDiagnosticVMOptions -XX:CompileCommand=dontinline,com/humio/util/HotspotUtilsJ.dontInline -Xlog:gc+jni=debug:stdout -Dakka.log-config-on-start=on -Xlog:gc*:stdout:time,tags
  affinity:
    # Affinity policy to prevent multiple Humio pods per node (recommended)
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - humio-core
          topologyKey: "kubernetes.io/hostname"
global:
  sharedTokens:
    fluentbit: {kubernetes: in-cluster}
These settings will tell Helm to create a default three-node Humio
cluster with Kafka and Zookeeper. It will also create a Fluent Bit
daemonset that will collect logs from any pods running in the
Kubernetes cluster, and autodiscover the Humio endpoint and token.
We recommend installing Humio into its own namespace; in this example, we're using the logging namespace:
helm install humio humio/humio-helm-charts \
--namespace logging \
--values values.yaml
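Once the installation finishes, you can verify that the cluster components have been scheduled. The exact pod names depend on the release name and chart version, but you should see pods for humio-core, Kafka, Zookeeper, and the Fluent Bit daemonset:
kubectl get pods --namespace logging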
Logging In after Installation
There are a few ways to get the URL for a Humio cluster. In most cases, grabbing the load balancer URL is sufficient:
kubectl get service humio-humio-core-http -n logging -o go-template --template='http://{{(index .status.loadBalancer.ingress 0 ).ip}}:8080'
If you're running in Minikube, run this command instead:
minikube service humio-humio-core-http -n logging --url
If humio-core.authenticationMethod is set to single-user (the default), then you need to supply a username and password when logging in. The default username is developer, and the password can be retrieved with the following command:
kubectl get secret developer-user-password -n logging -o=template --template={{.data.password}} | base64 -D
The base64 command may vary depending on OS and distribution.
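For example, on most Linux distributions the GNU coreutils version of base64 uses a lowercase decode flag instead of -D:
kubectl get secret developer-user-password -n logging -o=template --template={{.data.password}} | base64 --decode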
For a full list of customizations, reference the Helm chart.
Upgrading with Kubernetes
To update using a non-rolling strategy, you'll have to delete the Humio pods temporarily and then bring them back with the new version.
1. Update the values.yaml file with the new version.
2. Delete the pods by running:
kubectl delete statefulset humio-humio-core -n logging
3. Re-create the statefulset/pods with the new version by running:
helm upgrade --values values.yaml humio humio/humio-helm-charts --namespace logging
To update using a rolling strategy, you can rely on the StatefulSet's rolling update to deploy the changes, as sketched below.
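A minimal sketch of the rolling approach, assuming the release name, namespace, and StatefulSet name used above and that the StatefulSet's update strategy is RollingUpdate: update values.yaml with the new version, run the upgrade, and watch the pods roll one at a time.
helm upgrade --values values.yaml humio humio/humio-helm-charts --namespace logging
kubectl rollout status statefulset humio-humio-core -n logging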
Uninstalling
If you decide to uninstall the Helm chart, you can do so by executing the following from the command line:
helm uninstall humio --namespace logging
kubectl delete namespace logging --cascade=true
Warning
Uninstalling like this will destroy all Humio data.