Manual Cluster Deployment
This section describes how to install LogScale configured as a distributed system across multiple machines. Running a distributed LogScale setup requires a Kafka cluster. You can set up such a cluster using our Docker image, or you can install Kafka using some other method.
Recommended Deployment Diagram
For a cluster deployment you should:
Deploy a minimum of three LogScale nodes
Deploy a minimum of three Kafka nodes
Use a load balancer to interface between nodes and clients
Figure 3. Deployment Diagram
Running Kafka Docker Images
Available: LogScale & ZooKeeper v1.108.0
The requirement for LogScale to use ZooKeeper was removed in LogScale 1.108.0. ZooKeeper may still be required by Kafka. Please refer to your chosen Kafka deployment documentation for details.
The recommended default is to run three instances of Kafka. These nodes should not be the same as the LogScale nodes. We recommend that you use a standard Kafka Docker image, for example Confluent Kafka Docker.
When configuring Kafka, each Kafka host must have its own unique ID. For example, in a three-node cluster:
Host | Kafka ID
---|---
kafka1 | 1
kafka2 | 2
kafka3 | 3
How this is configured depends on your chosen Kafka image. When using Confluent you can run in two modes:
In KRaft mode, Kafka does not use ZooKeeper; the node ID is set with the node.id parameter, or the KAFKA_NODE_ID variable in Docker. In addition, in KRaft mode you must configure a unique cluster ID, shared by all nodes in the cluster.
In ZooKeeper mode, Kafka uses ZooKeeper, and the broker ID is set with the broker.id parameter, or the KAFKA_BROKER_ID variable in Docker.
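Outside Docker, the same IDs map directly onto each broker's configuration file. As a sketch, a minimal server.properties fragment for host kafka2 in KRaft mode might look like this (host names and ports are illustrative, not prescribed by LogScale):

```
# server.properties on kafka2 (KRaft mode)
process.roles=broker,controller
node.id=2
controller.quorum.voters=1@kafka1:9093,2@kafka2:9093,3@kafka3:9093
```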
Ensure that the listener is something the LogScale instances can reach over the network. The default Kafka ports should be open and accessible between the Docker containers. If in doubt, please refer to the Kafka documentation.
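One way to confirm reachability from a LogScale host is a simple TCP connect against each broker's advertised listener. This is a sketch: the check_listener helper is illustrative, and the host names and ports are assumptions matching the example below; adjust them to your environment.

```shell
# check_listener HOST PORT: succeed only if a TCP connection to HOST:PORT
# can be opened within 2 seconds (uses bash's /dev/tcp pseudo-device).
check_listener() {
  host="$1"; port="$2"
  timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null
}

# Check each broker's PLAINTEXT listener from a LogScale host:
for broker in kafka1:29092 kafka2:29094 kafka3:29096; do
  host=${broker%:*}; port=${broker#*:}
  if check_listener "$host" "$port"; then
    echo "$host:$port reachable"
  else
    echo "$host:$port UNREACHABLE"
  fi
done
```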
For example, to run the Confluent Docker image in KRaft mode across three hosts:
docker run -d \
--name=kafka1 \
-h kafka1 \
-p 9092:9092 \
-p 29092:29092 \
-e KAFKA_NODE_ID=1 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9092' \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka1:29093,2@kafka2:29095,3@kafka3:29097' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka1:29092,CONTROLLER://kafka1:29093,PLAINTEXT_HOST://0.0.0.0:9092' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='SNO4Bhs6QYuk4lQUougG6w' \
confluentinc/cp-server:latest
docker run -d \
--name=kafka2 \
-h kafka2 \
-p 9093:9093 \
-p 29094:29094 \
-e KAFKA_NODE_ID=2 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka2:29094,PLAINTEXT_HOST://localhost:9093' \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka1:29093,2@kafka2:29095,3@kafka3:29097' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka2:29094,CONTROLLER://kafka2:29095,PLAINTEXT_HOST://0.0.0.0:9093' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='SNO4Bhs6QYuk4lQUougG6w' \
confluentinc/cp-server:latest
docker run -d \
--name=kafka3 \
-h kafka3 \
-p 9094:9094 \
-p 29096:29096 \
-e KAFKA_NODE_ID=3 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://kafka3:29096,PLAINTEXT_HOST://localhost:9094' \
-e KAFKA_PROCESS_ROLES='broker,controller' \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@kafka1:29093,2@kafka2:29095,3@kafka3:29097' \
-e KAFKA_LISTENERS='PLAINTEXT://kafka3:29096,CONTROLLER://kafka3:29097,PLAINTEXT_HOST://0.0.0.0:9094' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e CLUSTER_ID='SNO4Bhs6QYuk4lQUougG6w' \
confluentinc/cp-server:latest
Start the Docker images on each host, mounting the configuration files and data locations created in the previous steps.
For more information, see the Confluent documentation for a guide on creating a multi-node cluster.
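All three brokers must share the same CLUSTER_ID (note that it is identical in the three commands above). Kafka normally generates one with the kafka-storage random-uuid tool; if that tool is not at hand, a value of the same shape (a 22-character, unpadded, URL-safe base64 UUID) can be produced with standard shell utilities. A sketch:

```shell
# Generate a Kafka-style cluster ID: 16 random bytes encoded as
# unpadded base64url, the same shape as `kafka-storage random-uuid` output.
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$CLUSTER_ID"
```

Pass the generated value as CLUSTER_ID to every broker in the cluster.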
To verify that Kafka is running:
If you are running in ZooKeeper mode, use nc to get the status of each ZooKeeper instance. The following must respond with either Leader or Follower for all instances:
shell$
echo stat | nc 192.168.1.1 2181 | grep '^Mode: '
Optionally, use your favorite Kafka tools to validate the state of your Kafka cluster. For example, list the topics, expecting an empty list since this is a fresh install of Kafka:
shell$
kafka-topics.sh --bootstrap-server localhost:9092 --list
LogScale Docker Core Container
LogScale is distributed as a core Docker image that contains only LogScale; use the humio/humio-core edition for distributed deployments. For developer deployments using a non-core Docker image that includes Kafka and ZooKeeper, see Docker Deployment.
Create an empty file on the host machine to store the LogScale configuration, for example humio.conf. You can use this file to pass JVM arguments to the LogScale Java process.
Enter the following settings into the configuration file, editing them to match your environment:
# Make LogScale write a backup of the data files:
# Backup files are written to mount point "/backup".
#BACKUP_NAME=my-backup-name
#BACKUP_KEY=my-secret-key-used-for-encryption
# ID to choose for this server when starting up the first time.
# Leave commented out to autoselect the next available ID.
# If set, the server refuses to run unless the ID matches the state in data.
# If set, must be a (small) positive integer.
#BOOTSTRAP_HOST_ID=1
# The URL that other hosts can use to reach this server. Required.
# Examples: https://humio01.example.com or http://humio01:8080
# Security: We recommend using a TLS endpoint.
# If all servers in the LogScale cluster share a closed LAN, using those endpoints may be okay.
EXTERNAL_URL=https://humio01.example.com
# Kafka bootstrap servers list. Used as `bootstrap.servers` towards Kafka.
# Should be set to a comma-separated string of host:port pairs.
# Example: `my-kafka01:9092` or `kafkahost01:9092,kafkahost02:9092`
KAFKA_SERVERS=kafka1:9092,kafka2:9092
# Select the TCP port to listen for http.
#HUMIO_PORT=8080
# Select the IP to bind the udp/tcp/http listening sockets to.
# Each listener entity has a listen-configuration. This ENV is used when that is not set.
#HUMIO_SOCKET_BIND=0.0.0.0
# Select the IP to bind the http listening socket to. (Defaults to HUMIO_SOCKET_BIND)
#HUMIO_HTTP_BIND=0.0.0.0
For more information on each of these environment variables, see the Environment Variables reference page.
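A missing required setting typically only surfaces as a failed container start, so it can be worth sanity-checking the file first. A minimal sketch (the require_setting helper and the sample file contents are illustrative, not part of LogScale):

```shell
# require_setting FILE KEY: succeed only if FILE contains an
# uncommented KEY=... line (commented-out settings do not count).
require_setting() {
  grep -Eq "^${2}=" "$1"
}

# Sample configuration file with the two settings the guide sets explicitly:
cat > humio.conf <<'EOF'
EXTERNAL_URL=https://humio01.example.com
KAFKA_SERVERS=kafka1:9092,kafka2:9092
EOF

for key in EXTERNAL_URL KAFKA_SERVERS; do
  require_setting humio.conf "$key" && echo "$key: present" || echo "$key: MISSING"
done
```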
If you make changes to the settings in your environment file, simply stopping and starting the container will not work. You need to docker rm the container and docker run it again to pick up changes.
Create an empty directory on the host machine to store data for LogScale:
$ mkdir /data/humio-data
Pull the latest LogScale image:
$ docker pull humio/humio-core
Run the LogScale Docker image as a container:
$ docker run -d --restart always --net=host \
-v /data/logs:/data/logs \
-v /data/humio-data:/data/humio-data \
-v /backup:/backup \
--stop-timeout 300 \
--env-file $PATH_TO_CONFIG_FILE --name humio-core humio/humio-core
Replace /data/humio-data before the : with the path to the humio-data directory you created on the host machine, and $PATH_TO_CONFIG_FILE with the path of the configuration file you created.
Verify that LogScale is able to start using the configuration provided by looking at the log file. In particular, it should not keep logging problems connecting to Kafka.
$ grep 'LogScale server is now running!' /data/logs/humio_std_out.log
$ grep -i 'kafka' /data/logs/humio_std_out.log
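The grep commands above only check the log once; in a provisioning script it can be more robust to poll until the startup message appears or a timeout passes. A sketch (the wait_for_startup helper and the 300-second default are assumptions, and the log path is the one used above):

```shell
# wait_for_startup LOGFILE [TIMEOUT_SECONDS]:
# poll LOGFILE once per second until the startup message appears,
# failing if TIMEOUT_SECONDS (default 300) elapses first.
wait_for_startup() {
  local log="$1" timeout="${2:-300}" waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if grep -q 'LogScale server is now running!' "$log" 2>/dev/null; then
      echo "started after ${waited}s"
      return 0
    fi
    sleep 1
    waited=$((waited + 1))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}

# Usage: wait_for_startup /data/logs/humio_std_out.log 300
```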
LogScale is now running. Navigate to http://localhost:8080 to view the LogScale Web UI.
In the above example, we started the LogScale container with full access to the network of the host machine. In a production environment, you should restrict this access by using a firewall, or adjusting the Docker network configuration.
Starting LogScale as a Service
There are different ways of starting the Docker container as a service. In the example above, we used Docker's restart policies. Alternatively, LogScale can be started using a process manager.
If you see the following warning after starting up the LogScale service, you can safely ignore it. It does not affect the LogScale service.
LogScale server is now running!
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.humio.util.FileUtilsJNA (file:/app/humio/humio-assembly.jar) to field sun.nio.ch.FileChannelImpl.fd
WARNING: Please consider reporting this to the maintainers of com.humio.util.FileUtilsJNA
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release.
Configuring LogScale
Please refer to the Configuration Settings section.
Cluster Management API
Please see the Cluster Management API reference page.
To fully understand the roles of the various components of a LogScale cluster, refer to the Single-Node Setup documentation.
The following sections can help you understand the effects of adding more nodes of each component to your cluster.
Cluster Operation
Some additional notes on the deployment of LogScale within a cluster and how it affects the different components are outlined below.
ZooKeeper Deployment for Cluster
Available: LogScale & ZooKeeper v1.108.0
The requirement for LogScale to use ZooKeeper was removed in LogScale 1.108.0. ZooKeeper may still be required by Kafka. Please refer to your chosen Kafka deployment documentation for details.
When using ZooKeeper, the following factors apply:
A ZooKeeper cluster can survive losing less than half its nodes. This means that a 3-node ZooKeeper cluster can survive 1 node going offline, a 5-node cluster can survive 2 nodes going offline, and so on. A consequence of this is that you should always have an odd number of ZooKeeper nodes.
Neither LogScale nor Kafka overly stresses ZooKeeper, so you are unlikely to see any difference in LogScale's performance from adding more ZooKeeper nodes.
Kafka Deployment for Cluster
Adding more Kafka nodes can alleviate bottlenecks for data ingestion, but will not affect query performance.
More Kafka nodes give you more resiliency against data loss in case Kafka hosts go offline. The number of nodes you can lose before data loss occurs depends on Kafka's configured replication factor. Kafka can survive losing all but one replica. Adding extra replicas will slow down ingest somewhat, as data must be duplicated across Kafka nodes.
When allowing LogScale to manage Kafka topics on a Kafka cluster at or above three nodes, LogScale will replicate the global-events and transientChatter-events topics to three nodes, and will require that two of those nodes are available at all times.
It is often convenient to co-host ZooKeeper and Kafka on the same nodes. You might want to host them on different nodes so you can have a different number of each. Since ZooKeeper does not need as many nodes to be resilient against downtime, it can make sense to have only a few (e.g. 3 or 5) ZooKeeper nodes, but more Kafka nodes.
It is convenient to run Kafka and LogScale on the same nodes for low data volumes. As both services can be demanding for the local IO system, we recommend that LogScale and Kafka do not run on the same nodes once the cluster is scaled up.
LogScale Deployment for Cluster
Adding more LogScale nodes will increase the performance of queries, as the work can be split across more machines. More nodes also allow you to replicate your data, ensuring resiliency against machine failure. LogScale can survive losing all but one replica. With bucket storage enabled, LogScale can survive losing all nodes, as long as the Kafka cluster does not lose state.