Use Case: Running LogScale on Kubernetes

Deprecated: v1.16

Installation of LogScale using the Kubernetes Helm charts is deprecated. Refer instead to Installation using the LogScale Operator.

If you are looking for information about shipping data from a Kubernetes cluster to LogScale without running LogScale in Kubernetes, please see our Kubernetes platform documentation.

Installation using Helm

Directions for installing Helm for your particular OS flavor can be found on the Helm GitHub page.

Once Helm is installed, add the main Helm chart repository and update it. This repository contains the subcharts for LogScale.

The charts depend on the Confluent Helm charts, which are pulled in automatically when running the installation below.

$ helm repo add humio https://humio.github.io/humio-helm-charts
$ helm repo update
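
You can verify the repository was added by searching it for the LogScale charts; the search subcommand differs between Helm versions:

$ helm search repo humio   # Helm 3
$ helm search humio        # Helm 2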

Now create a values.yaml file. Adjust the version, resources, and JVM memory as appropriate for the nodes on which the pods will be scheduled. The jvm.xmx and jvm.maxDirectMemorySize settings should each be half of the memory allocated to the pod; for example, with 220Gi allocated, set each to 110g.

humio-core:
  enabled: true

  # The number of LogScale pods
  replicas: 3

  # Use a custom version of LogScale.
  image: humio/humio-core:<version>

  # Custom partitions
  ingest:
    initialPartitionsPerNode: 4
  storage:
    initialPartitionsPerNode: 4

  # Custom CPU/Memory resources
  resources:
    limits:
      cpu: 30
      memory: 220Gi
    requests:
      cpu: 28
      memory: 220Gi

  # Custom JVM memory settings (these will depend on resources defined)
  jvm:
    xss: 2m
    xms: 4g
    xmx: 110g
    maxDirectMemorySize: 110g
    extraArgs: -XX:+UseParallelGC -XX:+UnlockDiagnosticVMOptions -XX:CompileCommand=dontinline,com/humio/util/HotspotUtilsJ.dontInline -Xlog:gc+jni=debug:stdout -Dakka.log-config-on-start=on -Xlog:gc*:stdout:time,tags

  # Affinity policy to prevent multiple LogScale pods per node (recommended)
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - humio-core
          topologyKey: "kubernetes.io/hostname"

global:
  sharedTokens:
    fluentbit: {kubernetes: in-cluster}

These settings tell Helm to create a default three-node LogScale cluster with Kafka and ZooKeeper. It will also create a Fluent Bit daemonset that collects logs from any pods running in the Kubernetes cluster and autodiscovers the LogScale endpoint and token. We recommend installing LogScale into its own namespace; in this example we're using the logging namespace.
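
Depending on your Helm version, the namespace may need to exist before installing; with Helm 3 you can create it up front (or pass --create-namespace to helm install):

$ kubectl create namespace logging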

$ helm install humio humio/humio-helm-charts \
   --namespace logging \
   --values values.yaml

Logging In after Installation

There are a few ways to get the URL for a LogScale cluster. In most cases, grabbing the load balancer URL is sufficient:

$ kubectl get service humio-humio-core-http -n logging -o go-template --template='http://{{(index .status.loadBalancer.ingress 0 ).ip}}:8080'

If you're running in Minikube, run this command instead:

$ minikube service humio-humio-core-http -n logging --url
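
If the load balancer command prints an empty address, the cloud provider may still be provisioning it; you can watch the service until an external IP appears:

$ kubectl get service humio-humio-core-http -n logging -w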

If humio-core.authenticationMethod is set to single-user (the default), you need to supply a username and password when logging in. The default username is developer, and the password can be retrieved with the following command:

$ kubectl get secret developer-user-password -n logging -o=template --template={{.data.password}} | base64 -D

The base64 flag may vary depending on OS and distribution: `-D` works on macOS, while GNU coreutils on Linux uses `-d`.

For a full list of customizations, reference the Helm chart.
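
To see every value the chart accepts, dump its defaults with helm show values (Helm 3) or helm inspect values (Helm 2):

$ helm show values humio/humio-helm-charts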

Upgrading with Kubernetes

To update using a non-rolling strategy, you'll have to delete the LogScale pods temporarily and then bring them back with the new version.

  1. Update the values.yaml with the new version

  2. Delete the pods by running:

    $ kubectl delete statefulset humio-humio-core -n logging
  3. Re-create the statefulset/pods with the new version by running:

    $ helm upgrade --values values.yaml humio humio/humio-helm-charts
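
While the upgrade runs, you can watch the pods come back up on the new version:

$ kubectl get pods -n logging -w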

To update using a rolling strategy, you can rely on the statefulset to roll out the changes one pod at a time, provided its update strategy is set to RollingUpdate.
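
As a sketch, assuming the statefulset name from the install above, you could set the update strategy with a patch and then run the upgrade; Kubernetes then replaces the pods one at a time:

$ kubectl patch statefulset humio-humio-core -n logging \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
$ helm upgrade --values values.yaml humio humio/humio-helm-charts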


If you decide to uninstall the Helm chart, you can do so by executing the following from the command line:

$ helm delete --purge humio
$ kubectl delete namespace logging --cascade=true

Uninstalling like this will destroy all LogScale data.
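
Depending on the storage class's reclaim policy, the backing persistent volumes may survive namespace deletion; you can verify that nothing is left behind with:

$ kubectl get pv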