Preparation for Installing LogScale
There are a few things to do before installing LogScale. The sections below cover each of them.
Hardware Requirements
Hardware requirements depend on how much data you will be ingesting, and how many concurrent searches you will be running.
Scaling your Environment
LogScale was built to scale, and it scales well across the nodes in a cluster. Running a cluster of three or more LogScale nodes provides higher capacity in terms of both ingest and search performance, and also allows high availability by replicating data to more than one node.
If you want to run a clustered setup, please review Cluster Setup.
Estimating Resources
Here are a few guidelines to help you determine what hardware you'll need.
Assume data compresses 9x on ingest. Test your installation; better compression means better performance.
You need to be able to hold 48 hours of compressed data in 80% of your RAM.
You want enough hyper-threads/vCPUs (each giving you 1GB/s search) to be able to search 24 hours of data in less than 10 seconds.
You need disk space to hold your compressed data. Never fill your disk more than 80%.
For information on how to choose hardware and how to size your LogScale installation, see Instance Sizing.
Example Setup
Suppose your machine has 64 GB of RAM, 8 hyper-threads (4 cores), and 1 TB of storage. With 9x compression it can hold roughly 460 GB of ingested data compressed in RAM and search 8 GB/s, so a 10-second query can scan about 80 GB of data. This machine therefore fits an ingest of 80 GB/day, with more than 5 days of data available for fast querying. You can store 7.2 TB of ingested data before the disk is 80% full, corresponding to 90 days at an 80 GB/day ingest rate.
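To run the same arithmetic for your own hardware, the guidelines above can be expressed as a small calculation. The sketch below uses the figures from this example; the 9x compression ratio and 1 GB/s per hyper-thread are the assumptions stated earlier, so substitute your own measured values.
# Sizing sketch based on the guidelines above; all input values are example assumptions
RAM_GB=64; THREADS=8; DISK_GB=1000; COMPRESSION=9
echo "Compressed data held in 80% of RAM:    $((RAM_GB * 80 / 100)) GB"
echo "Ingested data that fits in RAM:        $((RAM_GB * 80 / 100 * COMPRESSION)) GB"
echo "Search throughput:                     $THREADS GB/s"
echo "Data scanned by a 10-second query:     $((THREADS * 10)) GB"
echo "Ingest stored before disk is 80% full: $((DISK_GB * 80 / 100 * COMPRESSION)) GB"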
This example assumes that all data has the same Data Retention. But you can configure LogScale to automatically delete some events before others, allowing some data to be kept for several years while other data gets deleted after one week, for example.
For more details, refer to our Instance Sizing page.
Configuration Options
Please refer to the Configuration Parameters documentation page.
Enable Authentication
For production deployments, you want to set up authentication. If authentication is not configured, LogScale runs in NO_AUTH mode, meaning that there are no access restrictions at all: anyone with access to the system can do anything. Refer to Authentication Configuration for the different login options.
If you only want to experiment with LogScale, you can skip setting up authentication for now.
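As a minimal illustration, assuming the single-user login method described under Authentication Configuration, the configuration looks roughly like the following. The variable names are taken from that method; check the documentation page for the exact spelling and for the other login options (LDAP, SAML, OIDC).
# Illustrative single-user authentication settings (verify names against Authentication Configuration)
AUTHENTICATION_METHOD=single-user
SINGLE_USER_PASSWORD=change-me-to-a-strong-password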
Increase Open File Limit
For production usage, LogScale needs to be able to keep a lot of files open for sockets and actual files from the file system. The default limits on Unix systems are typically too low for any significant amount of data and concurrent users.
You can verify the actual limits for the process using:
# Find the LogScale (humio) Java process and show its current open file limit
PID=`ps -ef | grep java | grep humio-assembly | head -n 1 | awk '{print $2}'`
cat /proc/$PID/limits | grep 'Max open files'
The minimum required settings depend on the number of open network connections and datasources. There is no harm in setting these limits high for the LogScale process. A value of at least 8192 is recommended.
You can do that using a simple text editor to create a file named 99-humio-limits.conf in the /etc/security/limits.d/ sub-directory. Copy these lines into that file:
# Raise limits for files:
humio soft nofile 250000
humio hard nofile 250000
In the /etc/pam.d/ sub-directory, create another file named common-session (or edit it if your distribution already provides one). Copy these lines into it:
# Apply limits:
session required pam_limits.so
These settings apply to the next LogScale user login, not to any running processes.
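After the next login of the humio user, you can confirm that the new limit is in effect before starting LogScale. Whether sudo passes through pam_limits depends on your distribution's PAM configuration, so a direct login gives the most reliable answer.
# Start a login shell as the humio user and print its open file limit
sudo -iu humio bash -c 'ulimit -n'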
If you run LogScale using Docker, you can raise the limit using the --ulimit="nofile=8192:8192" option on the docker run command.
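For example, a run command might look like the following; the image name, container name, and port mapping are placeholders for whatever you normally use, and the --ulimit option is the only part this example is meant to show.
# Placeholder invocation; substitute your usual image and options
docker run -d --name logscale \
  --ulimit="nofile=8192:8192" \
  -p 8080:8080 \
  humio/humio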
Separate Disk for Kafka Data
For production usage, you should ensure Kafka's data volume is on a separate disk/volume from the other LogScale services. It is quite easy for Kafka to fill the disk it is using if LogScale ingestion slows down for any reason. Keeping Kafka's data on its own disk/volume prevents the other services from crashing along with Kafka and makes recovery easier. If Kafka runs on separate servers or containers you are likely covered already, so this mainly applies when you run the all-in-one Docker image we supply.
We also highly recommend setting up your own disk usage monitoring to alert you when any disk exceeds 80% usage, so you can take corrective action before it fills completely.
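A minimal sketch of such a check, suitable for running from cron, is shown below. The mount points and the use of logger are examples; a real deployment would normally feed an existing monitoring system instead.
# Warn when any of the listed data mounts passes 80% usage (example paths, adjust to your layout)
for mount in /data/logscale /data/kafka; do
  usage=$(df --output=pcent "$mount" | tail -n 1 | tr -dc '0-9')
  if [ "$usage" -ge 80 ]; then
    echo "WARNING: $mount is ${usage}% full" | logger -t disk-monitor
  fi
done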
Check noexec on /tmp
Check the filesystem options on /tmp. LogScale makes use of the Facebook Zstandard real-time compression algorithm, which requires the ability to execute files directly from the configured temporary directory.
The options for the filesystem can be checked using mount:
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=1967912k,nr_inodes=491978,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=399508k,mode=755,inode64)
/dev/sda5 on / type ext4 (rw,relatime,errors=remount-ro)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,seclabel)
You can temporarily remove noexec using mount to remount the directory:
mount -oremount,exec /tmp
To permanently remove the noexec flag, update /etc/fstab to remove the flag from the options:
tmpfs /tmp tmpfs mode=1777,nosuid,nodev 0 0
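After remounting (or rebooting with the updated /etc/fstab), you can confirm that the flag is gone:
# Print the mount options for /tmp; the output should no longer contain noexec
findmnt -no OPTIONS /tmp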
Google Cloud Platform (GCP)/Google Kubernetes Engine (GKE)
Assumptions:
30 days of retention on NVMe
20% overhead left free on NVMe
10x compression
GCS Bucket storage used for longer retention
LogScale does not provide a self-hosted Kubernetes solution for Kafka and Zookeeper
Zookeeper/Kafka clusters are separate from LogScale clusters to avoid resource contention and allow independent management.
X-Small - 1 TB/Day Ingestion
Software | Instances | Machine Type/vCPU | Memory | Storage | Total Storage |
---|---|---|---|---|---|
LogScale | 3 | n-standard-16 / 16 | 122 GB | NVME 3 TB | 9 TB |
Kafka | 3 | n-standard-8 / 8 | 32 GB | PD-SSD 500 GB | 1.5 TB |
Zookeeper | 3 | Shared with Kafka | Shared with Kafka | PD-SSD 50 GB | 150 GB |
Small - 3 TB/Day Ingestion
Software | Instances | Machine Type/vCPU | Memory | Storage | Total Storage |
---|---|---|---|---|---|
LogScale | 3 | n2-highmem-16 / 16 | 128 GB | NVME 6 TB | 18 TB |
Kafka | 3 | n-standard-8 / 8 | 32 GB | PD-SSD 500 GB | 1.5 TB |
Zookeeper | 3 | Shared with Kafka | Shared with Kafka | PD-SSD 50 GB | 150 GB |
Medium - 5 TB/Day Ingestion
Software | Instances | Machine Type/vCPU | Memory | Storage | Total Storage |
---|---|---|---|---|---|
LogScale | 6 | n-standard-32 / 32 | 128 GB | NVME 6 TB (16x375GB) | 36 TB |
Kafka | 6 | n-standard-8 / 8 | 32 GB | PD-SSD 1 TB | 6 TB |
Zookeeper | 3 | Shared with Kafka | Shared with Kafka | PD-SSD 50 GB | 150 GB |
Large - 10 TB/Day Ingestion
Software | Instances | Machine Type/vCPU | Memory | Storage | Total Storage |
---|---|---|---|---|---|
LogScale | 12 | n-standard-32 / 32 | 128 GB | NVME 6 TB (16x375GB) | 72 TB |
Kafka | 6 | n-standard-8 / 8 | 32 GB | PD-SSD 1 TB | 6 TB |
Zookeeper | 3 | Shared with Kafka | Shared with Kafka | PD-SSD 50 GB | 150 GB |
X-Large - 30 TB/Day Ingestion
Software | Instances | Machine Type/vCPU | Memory | Storage | Total Storage |
---|---|---|---|---|---|
LogScale | 30 | n-standard-64 / 64 | 256 GB | NVME 7.5 TB (16x375GB) | 225 TB |
Kafka | 9 | n-standard-8 / 8 | 32 GB | PD-SSD 1.5 TB | 13.5 TB |
Zookeeper | 3 | Shared with Kafka | Shared with Kafka | PD-SSD 50 GB | 150 GB |