Best Practice: Installing Humio: Understanding Humio Clusters

Last Updated: 2022-07-21

Before installing a Humio Cluster, it is important to understand the components of the cluster and how they work together to support your Humio deployment.

Figure 276. AWS clusters in 3 availability zones. Traffic is distributed using a load balancer


Figure 277. A more detailed description of what is running in each cluster


The remainder of this section provides background on the different components and requirements of your Humio environment and deployment.

For guides on installing a Humio cluster using AWS, Kubernetes, or Ansible, see the KB articles referenced in the section below.

Humio Basics

If you want to set up a cluster following our best practices, we recommend using our installation and sizing guides, which cover AWS, Kubernetes, and bare-metal installs using Ansible scripts: ???. If you follow those guides, most of the settings below are adjusted as part of the installation; if you need to install from scratch instead, the parameters below are important for everything to run smoothly.

Scaling your Environment

Humio was built to scale, and scales well across the nodes in a cluster. Running a cluster of three or more Humio nodes provides higher capacity in terms of both ingest and search performance, and also allows high availability by replicating data to more than one node.

If you want to run a Humio cluster, follow one of the guides for installing with Kubernetes or Ansible: Links here

Estimating Resources

Here are a few guidelines to help you determine what hardware you'll need.

  • Assume data compresses 9x on ingest. Test your installation; better compression means better performance.

  • You need to be able to hold 48 hours of compressed data in 80% of your RAM.

  • You want enough hyper-threads/vCPUs (each giving you 1GB/s search) to be able to search 24 hours of data in less than 10 seconds.

  • You need disk space to hold your compressed data. Never fill your disk more than 80%.

For information on how to choose hardware, and how to size your Humio installation, see the Humio Instance Sizing page.

You can also follow the cluster installation guides, which include sizing suggestions based on our experience running Humio.

Example Setup

Say your machine has 64 GB of RAM, 8 hyper-threads (4 cores), and 1 TB of storage. Using 80% of RAM and 9x compression, it can hold roughly 460 GB of ingested data in memory, and its 8 hyper-threads can search about 8 GB/s. That means a 10-second query can run through 80 GB of data, so this machine fits an 80 GB/day ingest with just over 5 days of data available for fast querying. On disk, you can store 7.2 TB of ingested data before the disk is 80% full, corresponding to 90 days at an 80 GB/day ingest rate.
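
The arithmetic behind these numbers can be reproduced from the guidelines above. Below is a minimal sketch; the 9x compression ratio and the 1 GB/s of search per hyper-thread are the assumed figures from the guidelines, not measurements:

shell
# Sizing sketch based on the guidelines above; all figures are assumptions.
RAM_GB=64; VCPUS=8; DISK_GB=1000
COMPRESSION=9            # assumed compression ratio on ingest
SEARCH_GBPS_PER_VCPU=1   # assumed search throughput per hyper-thread

echo "RAM-resident ingest:  $(( RAM_GB * 80 / 100 * COMPRESSION )) GB"         # ~460 GB
echo "Search throughput:    $(( VCPUS * SEARCH_GBPS_PER_VCPU )) GB/s"          # 8 GB/s
echo "10-second query span: $(( VCPUS * SEARCH_GBPS_PER_VCPU * 10 )) GB"       # 80 GB
echo "Disk capacity:        $(( DISK_GB * 80 / 100 * COMPRESSION )) GB ingest" # 7200 GB

At an 80 GB/day ingest rate, 7200 GB of disk capacity works out to the 90 days of retention mentioned above.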

This example assumes that all data has the same retention settings. But you can configure Humio to automatically delete some events before others, allowing some data to be kept for several years while other data gets deleted after one week, for example.

Configuration Options

Please refer to the Environment Variables page.

Authentication

For production deployments, you want to set up authentication. If authentication is not configured, Humio runs in NO_AUTH mode, meaning that there are no access restrictions at all: anyone with access to the system can do anything. Refer to Authentication Configuration for the different login options.
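
For example, the authentication method is selected through environment variables. The sketch below shows a minimal single-user setup; verify the variable names against the Environment Variables reference for your Humio version, and treat the password value as a placeholder:

shell
# Select the authentication method (see Authentication Configuration
# for ldap, oauth, saml, and other options).
AUTHENTICATION_METHOD=single-user
# Placeholder value; choose your own strong password.
SINGLE_USER_PASSWORD=changeme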

If you only want to experiment with Humio, you can skip this step for now.

Increase Open File Limit

For production usage, Humio needs to be able to keep a lot of files open for sockets and actual files from the file system. The default limits on Unix systems are typically too low for any significant amount of data and concurrent users.

You can verify the actual limits for the process using:

shell
# Find the PID of the running Humio JVM, then read its open-file limits.
PID=`ps -ef | grep java | grep humio-assembly | head -n 1 | awk '{print $2}'`
cat /proc/$PID/limits | grep 'Max open files'

The minimum required settings depend on the number of open network connections and datasources. There is no harm in setting these limits high for the Humio process. A value of at least 8192 is recommended.

You can do that using a simple text editor to create a file named 99-humio-limits.conf in the /etc/security/limits.d/ sub-directory. Copy these lines into that file:

shell
# Raise limits for files:
humio soft nofile 250000
humio hard nofile 250000

Create another file with a text editor, this time in the /etc/pam.d/ sub-directory, and name it common-session. Copy these lines into it:

shell
# Apply limits:
session required pam_limits.so

These settings apply to the next Humio user login, not to any running processes.
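
To check that the new limits take effect, start a fresh login shell for the service account and inspect its limits. This sketch assumes the service user is named humio, as in the limits file above, and forces a shell in case the account normally uses nologin:

shell
# Print the hard and soft open-file limits seen by a fresh humio login.
su -s /bin/bash - humio -c 'ulimit -Hn; ulimit -Sn'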

If you run Humio using Docker, you can raise the limit using the --ulimit nofile=8192:8192 option on the docker run command.
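
For instance, a minimal sketch using the all-in-one image might look like the following; the image tag and port mapping are illustrative and should be adjusted to your deployment:

shell
# Raise the container's open-file limit (soft:hard) at startup.
docker run --ulimit nofile=8192:8192 \
    -p 8080:8080 \
    humio/humio:latest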

Kafka considerations

For production usage, you should ensure Kafka's data volume is on a separate disk/volume from the other Humio services, because it is quite easy for Kafka to fill its disk if Humio ingestion slows down for any reason. If Kafka does fill its disk, keeping it on a separate disk/volume prevents the other services from crashing along with it and makes recovery easier. If Kafka runs on separate servers or in separate containers, you are likely covered already; this advice is primarily for situations where you're running the all-in-one Docker image we supply.

We also highly recommend setting up your own disk usage monitoring that alerts you when a disk becomes more than 80% full, so you can take corrective action before it fills completely.
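
As a starting point, a small check along these lines can be run from cron; the 80% threshold matches the recommendation above, and the mount point is an example to replace with your actual Kafka data volume (df --output requires GNU coreutils):

shell
# Warn when the filesystem holding Kafka's data is more than 80% full.
# /var/lib/kafka is an example path; use your actual Kafka data mount.
USAGE=$(df --output=pcent /var/lib/kafka | tail -n 1 | tr -d ' %')
if [ "$USAGE" -gt 80 ]; then
    echo "WARNING: Kafka data volume is ${USAGE}% full" >&2
fi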

Configure /tmp directory for compression

Check the filesystem options on /tmp. Humio makes use of the Facebook Zstandard real-time compression algorithm, which requires the ability to execute files directly from the configured temporary directory.

The options for the filesystem can be checked using mount:

shell
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=1967912k,nr_inodes=491978,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=399508k,mode=755,inode64)
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,seclabel)

In the example output above, /tmp is mounted with the noexec flag. You can temporarily remove noexec by remounting the directory:

shell
mount -o remount,exec /tmp

To permanently remove the noexec flag, update /etc/fstab to remove the flag from the options:

shell
tmpfs /tmp tmpfs mode=1777,nosuid,nodev 0 0
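
If remounting /tmp without noexec is not possible in your environment, an alternative is to point the JVM at a different temporary directory that allows execution. The sketch below assumes your launch method honors the HUMIO_JVM_ARGS environment variable and uses an example path:

shell
# Use a dedicated, exec-allowed temporary directory instead of /tmp.
# /opt/humio/tmp is an example; create it and make it writable by the
# Humio service user first.
HUMIO_JVM_ARGS=-Djava.io.tmpdir=/opt/humio/tmp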