LogScale Installation Preparation

There are a few things to do to prepare for installing LogScale. The sections below cover each preparation step.

Hardware Requirements

Hardware requirements depend on how much data you will be ingesting, and how many concurrent searches you will be running.

Scaling your Environment

LogScale is designed to scale, and scales well across the nodes in a cluster. Running a cluster of three or more LogScale nodes provides higher capacity in terms of both ingest and search performance, and also allows high availability by replicating data to more than one node.

If you want to run a LogScale cluster, see Manual Cluster Deployment.

Estimating Resources

Here are a few guidelines to help you determine what hardware you'll need.

  • Assume data compresses 9x on ingest. Test your installation; better compression means better performance.

  • You need to be able to hold 48 hours of compressed data in 80% of your RAM.

  • You want enough hyper-threads/vCPUs (each giving you 1GB/s search) to be able to search 24 hours of data in less than 10 seconds.

  • You need disk space to hold your compressed data. Never fill your disk more than 80%.
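To make these guidelines concrete, here is a small worked example. The ingest rate, compression ratio, and retention period below are hypothetical; substitute your own measurements:

```shell
# Hypothetical workload: 1 TB/day raw ingest, 9x compression, 30-day retention
ingest_gb_per_day=1000
compressed_gb_per_day=$(( ingest_gb_per_day / 9 ))    # ~111 GB/day on disk

# 48 hours of compressed data must fit in 80% of RAM
ram_gb=$(( compressed_gb_per_day * 2 * 100 / 80 ))    # ~277 GB RAM

# Search 24 hours of compressed data in under 10 s at ~1 GB/s per vCPU
vcpus=$(( (compressed_gb_per_day + 9) / 10 ))         # ~12 vCPUs (rounded up)

# 30 days of compressed data, never filling the disk past 80%
disk_gb=$(( compressed_gb_per_day * 30 * 100 / 80 ))  # ~4,162 GB of disk

echo "RAM: ${ram_gb} GB, vCPUs: ${vcpus}, disk: ${disk_gb} GB"
```

Rerun the arithmetic with the compression ratio you actually observe, since real-world compression varies with the shape of your data.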

For sample deployment configurations on bare metal, AWS, and GCP, and for detailed guidance on choosing hardware and sizing your LogScale installation, see Instance Sizing.

Configuration Options

Please refer to the Configuration Parameters documentation page.

Enable Authentication

For production deployments, authentication should be enabled. If authentication is not configured, LogScale runs in NO_AUTH mode, meaning that there are no access restrictions at all — anyone with access to the system can do anything. Refer to Authentication Configuration for different login options.
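As one illustration, basic single-user login can be configured through environment variables. The variable names below are assumptions based on common LogScale setups; verify them against the Authentication Configuration page for your version:

```ini
# Assumed variable names - confirm against your version's documentation
AUTHENTICATION_METHOD=single-user
SINGLE_USER_PASSWORD=use-a-strong-password-here
```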

If you only want to experiment with LogScale, you can skip enabling authentication.

Encryption for Local Storage

LogScale does not encrypt data on local drives as part of the application. Instead, administrators are encouraged to use relevant tooling for their operating system to fully encrypt the file systems holding LogScale and Kafka data.

A tool like cryptsetup can be used to encrypt an entire file system. Full Disk Encryption can be used to encrypt files at the hardware level.

Disable Swap Memory on Bare-metal

When installing in a bare-metal environment, disable swap memory: it leads Java and LogScale to believe that more memory is available than physically exists.

To disable swap memory, remove any swap entries from /etc/fstab.
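The fstab edit can be sketched as follows. The example works on a scratch copy so it is safe to try; on a real host, target /etc/fstab itself (as root) and also run `swapoff -a` so the change takes effect without a reboot:

```shell
# Build a scratch copy with one swap entry (on a real host, edit /etc/fstab)
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > /tmp/fstab.demo

# Comment out every swap entry so it is ignored at boot
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
```

Commenting the entries out rather than deleting them makes it easy to see later that swap was deliberately disabled.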

Increase Open File Limit

For production usage, LogScale needs to be able to keep a lot of files open for sockets and actual files from the file system. The default limits on Unix systems are typically too low for any significant amount of data and concurrent users.

You can verify the actual limits for the process using:

shell
$ PID=`ps -ef | grep java | grep humio-assembly | head -n 1 | awk '{print $2}'`
$ cat /proc/$PID/limits | grep 'Max open files'

The minimum required settings depend on the number of open network connections and datasources. There is no harm in setting these limits high for the LogScale process. A value of at least 8192 is recommended.

You can do that using a simple text editor to create a file named 99-humio-limits.conf in the /etc/security/limits.d/ sub-directory. Copy these lines into that file:

ini
# Raise limits for files:
humio soft nofile 250000
humio hard nofile 250000

Next, edit (or create) the file common-session in the /etc/pam.d/ sub-directory and append this line:

ini
# Apply limits:
session required pam_limits.so

These settings apply to the next LogScale user login, not to any running processes.

If you run LogScale using Docker, then you can raise the limit using the --ulimit="nofile=8192:8192" option on the docker run command.

Separate Disk for Kafka Data

For production usage, ensure Kafka's data volume is on a separate disk/volume from the other LogScale services. If LogScale ingestion slows down for any reason, Kafka can easily fill the disk it is using; keeping it on its own disk/volume prevents the other services from crashing along with Kafka and makes recovery easier. If Kafka runs on separate servers or containers, you are likely covered already; this advice primarily applies when running the all-in-one Docker image we supply.
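For example, a dedicated Kafka volume could be mounted via an /etc/fstab entry like the following. The device name and mount point here are hypothetical; adjust them for your system:

```ini
# Hypothetical dedicated volume for Kafka data
/dev/sdb1  /var/lib/kafka  ext4  defaults,noatime  0  2
```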

We also highly recommend setting up disk usage monitoring that alerts you when usage exceeds 80%, so you can take corrective action before a disk fills completely.
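As a starting point before a full monitoring system is in place, a one-liner like the following lists any filesystem above the threshold. The 80% cutoff matches the guideline above; adapt it to your environment:

```shell
# Print any mounted filesystem using more than 80% of its capacity
df -P | awk 'NR > 1 { use = $5; gsub("%", "", use); if (use + 0 > 80) print $6 " is at " $5 }'
```

Running this from cron with the output piped to your alerting channel is a common stopgap until proper monitoring is deployed.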

Check noexec on /tmp

Check the filesystem options on /tmp. LogScale makes use of the Facebook Zstandard real-time compression algorithm, which requires the ability to execute files directly from the configured temporary directory.

The options for the filesystem can be checked using mount:

shell
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=1967912k,nr_inodes=491978,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=399508k,mode=755,inode64)
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,seclabel)

You can temporarily remove noexec using mount to 'remount' the directory:

shell
$ mount -oremount,exec /tmp

To permanently remove the noexec flag, update /etc/fstab to remove the flag from the options:

ini
tmpfs /tmp tmpfs mode=1777,nosuid,nodev 0 0