Logging LogScale to LogScale

When running a LogScale cluster in production, we highly recommend shipping LogScale's internal logs to another cluster. That way, if you run into problems with your production cluster, you are still able to debug what went wrong.

You can use the humio/insights package to monitor any LogScale cluster. It includes dashboards and saved queries that are useful for debugging what went wrong with LogScale.

This guide describes how to ship LogScale's internal logs to another LogScale cluster.

Preparation

This guide assumes you have another LogScale cluster ready and set up to receive another cluster's logs.

  1. Create a repository on your LogScale monitoring cluster. This is where LogScale's internal logs will be shipped.

  2. Install the humio/insights package on the newly created repository. This includes all the dashboards, queries, and parsers used to set up and monitor the other LogScale cluster.

  3. Create an ingest token and connect it to the parser named humio. This parser comes as part of the humio/insights package once installed.

  4. Configure a log shipper to send LogScale's logs. See below for how to do this.

Configuring a Log Shipper to Send LogScale Logs

Vector

We recommend sending logs using Vector. It is a lightweight agent with built-in support for shipping logs to LogScale via the humio_logs sink.

  1. Install Vector on all LogScale nodes within the cluster you are going to monitor. See the Vector documentation for installation instructions.

  2. Edit your vector.toml configuration file as shown below:

ini
[sources.logs]
type = "file"
include = ["${HUMIO_LOGS_DIR}/humio*.log"]

[sources.logs.multiline]
start_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
mode = "halt_before"
condition_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
timeout_ms = 2000

# Humio Sink: https://vector.dev/docs/reference/sinks/humio_logs/
[sinks.humio_cluster]
type = "humio_logs"
inputs = ["logs"]
compression = "gzip"
host = "${HUMIO_URL}"
token = "${INGEST_TOKEN}"
encoding.codec = "text"

In the above configuration you need to replace the following:

  • ${HUMIO_LOGS_DIR} with the path to the directory containing LogScale's internal logs. Note how globbing (*) is used to specify which files to collect. An example path is /data/humio-data/logs.

  • ${HUMIO_URL} with the URL of the LogScale cluster being used for monitoring. See Endpoints.

  • ${INGEST_TOKEN} with the ingest token from the repository on the cluster being used to monitor this LogScale cluster.

  3. Start Vector and check the repository for LogScale's internal logs.
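The multiline settings above merge continuation lines (for example, Java stack traces) into the timestamped event they belong to: a new event starts only at a line matching the date pattern, and everything else is appended to the previous event. A minimal Python sketch of that grouping behavior (an illustration only, not Vector's actual implementation):

```python
import re

# Events start at lines beginning with an ISO date, matching the
# start_pattern/condition_pattern used in the Vector configuration.
START = re.compile(r"^[0-9]{4}-[0-9]{2}-[0-9]{2}")

def group_multiline(lines):
    """Append non-timestamped lines to the most recent timestamped line,
    mirroring the effect of Vector's halt_before multiline mode."""
    events = []
    for line in lines:
        if START.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

raw = [
    "2023-01-15 12:00:01 INFO starting",
    "2023-01-15 12:00:02 ERROR boom",
    "java.lang.RuntimeException: boom",
    "    at com.example.Main.run(Main.java:42)",
]
for event in group_multiline(raw):
    print(repr(event))
```

Without this grouping, each stack-trace line would arrive as its own event and be unparseable on the monitoring cluster.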

Filebeat

We also support sending LogScale's internal logs via Filebeat. Follow these steps to set up Filebeat to ship LogScale's internal logs to another LogScale cluster.

  1. Install Filebeat on all LogScale nodes within the cluster you are going to monitor.

  2. Edit your filebeat.yml configuration file as shown below:

yaml
filebeat.inputs:
- paths:
    - ${HUMIO_LOGS_DIR}/humio-*.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

queue.mem:
  events: 8000
  flush.min_events: 1000
  flush.timeout: 1s

output:
  elasticsearch:
    hosts: ["${HUMIO_URL}"]
    password: "${INGEST_TOKEN}"
    compression_level: 5
    bulk_max_size: 200

In the above configuration you need to replace the following:

  • ${HUMIO_LOGS_DIR} with the path to the directory containing LogScale's internal logs. Note how globbing (*) is used to specify which files to collect. An example path is /data/humio-data/logs.

  • ${HUMIO_URL} with the URL of the LogScale cluster being used for monitoring. See Endpoints.

  • ${INGEST_TOKEN} with the ingest token from the repository on the cluster being used to monitor this LogScale cluster.

  3. Start Filebeat and check the repository to see that logs are being received.

Sending LogScale Logs to LogScale Cloud

To assist in monitoring your on-premise instance of LogScale, you can ship LogScale's logs into LogScale Cloud. This is convenient in that you won't have to run and maintain another cluster, and it makes it easy to share your internal logs with LogScale Support.

Before shipping logs into LogScale Cloud, you should already be in touch with LogScale Support. If it is agreed that your logs can be shipped into LogScale Cloud, set things up as described below.

Important

Sending on-prem logs to LogScale Cloud is only for LogScale Support troubleshooting. By sending logs, you are granting permission for LogScale employees to access this data only to assist in troubleshooting an explicit issue. We will not monitor this data without cause, and we will only access it if it will assist us in troubleshooting.

Prerequisites

There are a few things you'll need:

  • A LogScale Cloud account;

  • A repository preferably with the format onprem_$orgName_debug;

  • The humio/insights package installed on your repository.

Contact support if you need a repository created.

Configure the Log Shippers

For both Filebeat and Vector, ensure ${HUMIO_URL} is set to https://cloud.humio.com for EU Cloud or https://cloud.us.humio.com for US Cloud, depending on where your LogScale Cloud account is.
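For example, a Vector sink pointed at EU Cloud would look like the following (only the host value differs from the earlier configuration):

```ini
# Same humio_logs sink as before, pointed at LogScale Cloud (EU shown).
[sinks.humio_cluster]
type = "humio_logs"
inputs = ["logs"]
compression = "gzip"
host = "https://cloud.humio.com"  # use https://cloud.us.humio.com for US Cloud
token = "${INGEST_TOKEN}"
encoding.codec = "text"
```

Here ${INGEST_TOKEN} is the ingest token from the repository created for you on LogScale Cloud.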

Warning

The humio-debug.log file can contain sensitive information, including: e-mail addresses of your LogScale users, queries, names of repositories, views, and parsers, and IP addresses and access logs from your LogScale nodes. It does not contain any of your ingested events. Please ensure you are aware of this before shipping this log file into LogScale Cloud.

Removing Debug Logs from LogScale Cloud

By default, Support sets a 30-day time limit on the repository, which provides adequate time for us to assist and troubleshoot. After 30 days, the data is included in the removal process.

You can request to have these logs removed before the default 30 days if Support troubleshooting is no longer needed. In either case, you must stop ingest from the log shipper in order for all data to be removed.