In the Ingest Dashboard, from the Humio Insights Package, you can see plenty of information about the data you’re sending to your Humio cluster. This documentation page explains how each of the widgets can be useful.
This widget shows how much ingest each Humio node is receiving, in bytes per day.
The distribution of ingest a node receives is usually dictated by the number of Digest Partitions configured per host on the Cluster Administration page. If a node is receiving too little or too much ingest compared to other nodes, you may want to use Re-configure the Digest Partitions so they are distributed evenly.
This widget shows how much ingest each Humio repository is receiving, in bytes per day.
This can be useful for monitoring where your ingest is coming from, for example when you need to reduce ingest per day for licensing reasons. To reduce the load on the cluster, you can stop shipping data to particular repositories that have high ingest, or use Block Ingest in the repository settings of the Humio UI.
This shows you how much CPU each Humio node is utilizing. Within a cluster, if every node has the same specifications, and digest and storage partitions are evenly distributed, you would expect each node to show roughly the same CPU usage.
If some nodes are experiencing particularly high usage, this could indicate that something is wrong with Humio or your cluster setup.
This is a very important metric in Humio and can indicate slowness in the cluster. This timechart shows the average and median of the ingest latency metric. Ingest latency is defined as the time taken from an event being inserted into the ingest queue (which happens after parsing) until it is digested: updating live queries and adding the event to blocks ready for segment files.
Ideally, keeping this value below 10 seconds per node is a sign that the cluster is healthy.
Continuous increases in latency on one or more nodes can suggest problems, usually because Humio is not digesting data as fast as it is ingesting it. This can mean that more data is being sent than the cluster’s resources can handle, or that those resources are being consumed elsewhere.
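To watch this trend outside the dashboard, you can run a query against the humio-metrics repository. The sketch below assumes the latency timer is exposed under the name event-latency with a mean field; both names are assumptions, so verify the metric names in your own cluster:

```
name=event-latency
| timeChart(host, function=avg(mean), span=5m)
```

This charts the average latency per host in five-minute buckets, making a steadily climbing node easy to spot.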
Humio has a built-in threshold that will start rejecting events from log shippers if ingest latency exceeds a certain limit. See the reference page on the MAX_INGEST_DELAY_SECONDS configuration variable.
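If you need to change this threshold, it is set as an environment variable in Humio’s configuration. A minimal sketch, assuming an environment-file deployment; the file path and the value shown are illustrative, not recommendations:

```ini
# Example Humio environment file (path varies by deployment).
# Start rejecting ingest once events have waited longer than
# this many seconds on the ingest queue.
MAX_INGEST_DELAY_SECONDS=60
```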
This is similar to the above, except that it measures latency per Digest Partition.
This timechart will show the number of changes to the set of active digest nodes triggered by digest coordination.
For a healthy system, this is close to zero, except when an administrator alters the desired digest partition scheme via Re-configure the Digest Partitions.
This is a timechart of which Humio nodes have been shut down in the last 24 hours.
If a node shutdown is unexpected, there may be ERROR logs in the humio repository explaining it. Running a query in the humio repository for logs around that particular host before it shut down may also reveal the problem. Other widgets in the Humio Insights App might also show something wrong with that particular node. Be sure to check the Errors Dashboard.
This timechart shows which Humio nodes have cancelled their live queries due to ingest latency on that node. Due to the way live queries work, events are analysed on ingest, before they are stored in segment files. This means that if there is significant ingest delay, these queries won’t be accurate or up to date. This is especially important for alerts.
There is an environment variable that triggers this behaviour.
This timechart shows the number of CPU ticks spent specifically on digest work and live query work.
This timechart shows the total time spent parsing events, in milliseconds per second, across all Humio hosts and parsers.
This table shows you the most used parsers in your Humio cluster. For each parser, it shows timeInParser, eventsPerSecond, and timePerEvent.
The timeInParser is the total time spent parsing events, in milliseconds, for that parser in the last hour.
The timePerEvent is the average time spent parsing each event in the last hour. This is very important, as it can tell you whether you have an inefficient parser: anything averaging greater than 1 millisecond per event should be investigated. Inefficient parsers are a common cause of ingest latency, since digesting new data slows down when the Humio nodes spend a lot of their time parsing.
A timechart of the Humio metric named data-ingester-errors. It shows the errors per second for each repository where there was an error parsing an event. To investigate, you can run a query in the affected repository that looks like this:
@error=true | groupby(@error_msg)
This will show you all of the ingest error messages. It should give you an indication as to what went wrong.
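To surface the most frequent error messages first, the same query can be extended with a sort. A sketch, relying on the _count field that groupby produces by default:

```
@error=true
| groupby(@error_msg)
| sort(_count, order=desc)
```

The top rows are then the messages responsible for most of the parse errors, which is usually where to start fixing the parser.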
This timechart shows the datasources that have increased or decreased their number of autoshards. If a Humio datasource is receiving a high volume of ingest, Humio needs to allocate more resources to digest that volume. It does this by allocating autoshards, which essentially means more CPU cores to help digest that data.
This widget can be useful for identifying the datasources that are high volume and may begin to struggle with their ingest load.
Following on from the above, this timechart shows which of those high-volume datasources are now hitting Humio’s maximum limit of autoshards.
Humio has a configurable value for the maximum number of autoshards that can be applied to a datasource, with the default being 16. This is configurable via the AUTOSHARDING_MAX environment variable. If you have a Humio cluster that’s ingesting multiple terabytes per day and you have datasources hitting this limit, you might increase the AUTOSHARDING_MAX variable, but not above 128. Auto-sharding is only a temporary solution, though. For a more permanent solution, you should configure proper Tagging on the datasource experiencing issues.
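As a sketch, raising the limit is a one-line configuration change in Humio’s environment file; the value 32 below is illustrative only:

```ini
# Raise the per-datasource autoshard ceiling (default is 16).
# Per the guidance above, keep this at or below 128.
AUTOSHARDING_MAX=32
```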
Typically, one core on a Humio node should be enough to handle 256GB of ingest per day, and each autoshard is the equivalent of allocating another core to digesting the data. For example, a datasource receiving 1TB per day would need roughly four autoshards (1024GB / 256GB per core). The effective maximum number of autoshards is therefore the number of cores you have available in your Humio cluster. If you’re hitting the maximum number of autoshards and that number is equivalent to the total number of cores in the cluster, you may need to allocate more resources to Humio to handle the amount of ingest you are sending.
This table shows the datasources in the last 24 hours that have required auto-sharding due to high-volume ingest, and the maximum number of autoshards each reached.
This timechart shows you the top repositories with the most tags and how many each repository has.
A Datasource is a set of Events that have the same Tags. Humio divides each repository into one or more datasources based on these tags.
The maximum number of datasources that can exist in a repository is 10000. This is defined by the MAX_DATASOURCES variable. Humio can run into issues if you exceed this limit, and you will not be able to ingest data into the affected repository. It’s therefore very important to implement Tagging on low-cardinality fields: a tag with a handful of values (such as an environment name) creates only a few datasources, whereas tagging on a high-cardinality field (such as a user ID) could create one datasource per user and quickly exhaust the limit.
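If you need temporary headroom while you fix your tagging, the limit is set by an environment variable like the others above. The value below is illustrative only, and raising it is no substitute for tagging on low-cardinality fields:

```ini
# Per-repository datasource limit (default is 10000).
MAX_DATASOURCES=15000
```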