Docker

Humio is distributed as a Docker image. This means that you can start an instance without a complicated installation procedure. However, if you want to get logs from Docker into Humio, then you should read the Docker Logging page instead.
Install Docker
The first step in installing Humio using Docker is to install Docker itself on the machine where you want to run Humio. You can download Docker from the Docker website, or install it with a package manager such as yum or apt-get.
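The exact package name depends on your distribution; the following commands are common examples rather than the only options (the package names here are general Linux knowledge, not taken from the Humio documentation):

sudo apt-get update && sudo apt-get install docker.io   # Debian/Ubuntu
sudo yum install docker                                 # RHEL/CentOS
sudo systemctl enable --now docker                      # start Docker now and at boot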
Once you have Docker installed, you'll need to create a Humio configuration file on the host machine. We recommend using the Launcher Script to configure Humio. It automatically sets configuration parameters according to the available infrastructure environment.
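As a rough reference, an environment file might look like the following. This is a minimal sketch: the file name (humio.env) and the parameter names and values shown are illustrative assumptions, so consult the configuration reference for the options relevant to your setup.

# humio.env -- example values only
HUMIO_PORT=8080
ELASTIC_PORT=9200
PUBLIC_URL=http://localhost:8080
AUTHENTICATION_METHOD=single-user
SINGLE_USER_PASSWORD=change-me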
Note
Docker only loads the environment file when the container is initially created. If you change settings in your environment file, simply restarting the container won't pick them up. You'll need to execute docker rm with the container name, and then execute docker run again for the changes to take effect.
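For example, applying changed settings to a container named humio (the name used in the run command below) looks like this, followed by running the same docker run command again:

docker stop humio
docker rm humio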
Now, make two directories on the host machine: one to store Humio's data in general and one for Kafka data. Then pull the latest Humio image by executing the following at the command line:
mkdir -p mounts/data mounts/kafka-data
docker pull humio/humio
Separate mount points help isolate Kafka from the other services. Kafka is notorious for consuming large amounts of disk space, so it's important to protect the other services from running out of disk space by using a separate volume in production deployments. Make sure all volumes are being appropriately monitored as well. If your installation does run out of disk space and gets into a bad state, you can find recovery instructions in Kafka switching.
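For a quick manual check of free space on the two volumes, something along these lines works (the paths assume the mounts/ directories created above):

df -h mounts/data mounts/kafka-data
du -sh mounts/kafka-data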
Incidentally, when you want to update Humio, read the Updating Humio documentation page.
Starting Docker with Humio
With everything downloaded and in place, you're ready to run the Humio Docker image as a container. Do this by executing something like the following from the command-line:
docker run -v $HOST_DATA_DIR:/data \
-v $HOST_KAFKA_DATA_DIR:/data/kafka-data \
-v $PATH_TO_READONLY_FILES:/etc/humio:ro \
--net=host \
--name=humio \
--ulimit="nofile=8192:8192" \
--stop-timeout 300 \
--env-file=$PATH_TO_CONFIG_FILE humio/humio
To customise the configuration:
- Replace $HOST_DATA_DIR with the path to the mounts/data directory for the data on the host machine.
- Replace $HOST_KAFKA_DATA_DIR with the path to the mounts/kafka-data directory.
- Replace $PATH_TO_CONFIG_FILE with the path of the configuration file you created.
- The directory $PATH_TO_READONLY_FILES provides a place to put files that Humio needs at runtime, such as certificates for SAML authentication.
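For instance, with the two mounts/ directories created earlier and a configuration file named humio.env in the current directory (both names are just the ones used in this guide's examples), the command could look like this; the /etc/humio mount is omitted because no read-only files are needed in this minimal case:

docker run -v $(pwd)/mounts/data:/data \
-v $(pwd)/mounts/kafka-data:/data/kafka-data \
--net=host \
--name=humio \
--ulimit="nofile=8192:8192" \
--stop-timeout 300 \
--env-file=$(pwd)/humio.env humio/humio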
At this point, Humio should be running. Using a web browser, navigate to http://localhost:8080 to open the Humio user interface. However, there are a few of the settings above that you might adjust further based on how you're using Humio with Docker.
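You can also check from the command line that the container is up and that Humio responds on port 8080. The /api/v1/status endpoint used here is an assumption; if it isn't available in your version, requesting the front page is enough to confirm that the web server answers:

docker ps --filter name=humio
docker logs humio
curl http://localhost:8080/api/v1/status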
If you're running the Humio container on a host that's using SELinux in enforcing mode, the container has to be started with the --privileged flag set.
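That is, the same run command with one extra flag (the remaining options are unchanged and elided here):

docker run --privileged ... humio/humio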
In the docker run command shown earlier, the Humio container was started with full access to the network of the host machine (--net=host). In a production environment, though, you should restrict this access by using a firewall or by adjusting the Docker network configuration. Another possibility is to forward explicit ports instead, for example -p 8080:8080. In that case you need to forward every port you configure Humio to use; by default, Humio only uses port 8080.
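For example, here is the run command from above with the host network replaced by an explicit port mapping (a sketch; add further -p mappings for any other ports you configure):

docker run -v $HOST_DATA_DIR:/data \
-v $HOST_KAFKA_DATA_DIR:/data/kafka-data \
-p 8080:8080 \
--name=humio \
--ulimit="nofile=8192:8192" \
--stop-timeout 300 \
--env-file=$PATH_TO_CONFIG_FILE humio/humio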
On a macOS machine, there can be problems with using the host network (i.e., --net=host). If that happens, use -p 8080:8080 to forward port 8080 on the host to the Docker container. Another concern is allowing enough memory to the virtual machine running Docker on macOS: open the Docker app, go to Preferences, and set the memory to 4 GB.
Running Humio as a System Service
The Docker container can be started as a service using the options described in the Docker run reference. To ensure Humio restarts automatically, add --detach --restart=always to the docker run command above:
docker run ... --detach --restart=always
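Spelled out with the options from the earlier example, that could look like the following (a sketch; adjust the volume paths and configuration file to match your setup):

docker run -v $HOST_DATA_DIR:/data \
-v $HOST_KAFKA_DATA_DIR:/data/kafka-data \
--net=host \
--name=humio \
--ulimit="nofile=8192:8192" \
--stop-timeout 300 \
--detach --restart=always \
--env-file=$PATH_TO_CONFIG_FILE humio/humio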