Logging LogScale to LogScale

When using a LogScale cluster in production, LogScale highly recommends sending LogScale internal logs to another cluster. This way, if you experience any problems with your production cluster, you can still determine what went wrong. This guide explains how to ship LogScale internal logs to another LogScale cluster.

You can use the humio/insights package to monitor a LogScale cluster. It comes with dashboards and saved queries that can be useful when debugging problems with LogScale.

Preparation

Assuming you have another LogScale cluster ready to receive your production LogScale cluster's logs, perform the following steps:

  1. Create a repository on your LogScale monitoring cluster. This will be where you ship LogScale internal logs.

  2. Identify the URL to which you will send the logs. The exact URL depends on the method and log shipper you choose. See LogScale URLs & Endpoints for more details on the endpoints you can use.

  3. Install the humio/insights package on the newly created repository. This includes all of the dashboards, queries and parsers used to monitor the other LogScale cluster.

  4. Create an ingest token and connect it to the parser named humio, which is included in the humio/insights package installed in the previous step.

  5. Open the appropriate ports on your firewall and hosts to allow communication with the remote LogScale cluster. For more information on the URL to use, see the notes below and the LogScale URLs & Endpoints page. In general, this will be:

    • Port 443 when using Vector or the Falcon LogScale Collector

    • Port 9200 when using Filebeat or another log shipper that makes use of the Elastic bulk ingest endpoint

At this point, your system is prepared. Next, you will configure a log shipper to send LogScale logs.

Configure a log shipper

A few steps are necessary to configure a log shipper to send LogScale logs to another LogScale system.

Falcon LogScale Collector

LogScale recommends shipping LogScale logs using the Falcon LogScale Collector. To install it, see the Falcon LogScale Collector documentation.

After you have it installed, you'll need to edit your configuration file to look like this:

yaml
sources:
  file:
    type: file
    sink: humio
    include:
      - $LOGSCALE_LOGS_DIR/*.log
    exclude:
      # humio-audit.log is included in humio-debug.log
      - $LOGSCALE_LOGS_DIR/humio-audit.log
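    # Treat lines that begin with a date (e.g., 2024-01-01) as the start of a new
    # event, so multi-line entries such as stack traces stay in one event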
    multiLineBeginsWith: ^[0-9]{4}-[0-9]{2}-[0-9]{2}

sinks:
  humio:
    type: humio
    token: $INGEST_TOKEN
    url: $YOUR_LOGSCALE_URL

In the above configuration, you need to replace the following:

  • $LOGSCALE_LOGS_DIR — the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, *.log will capture all files in the LogScale log directory with the extension .log.

  • $YOUR_LOGSCALE_URL — the URL of your LogScale cluster being used for monitoring. You do not need to specify the full path, but you will need to use the full hostname and port as appropriate. See LogScale URLs & Endpoints. A filled-in example follows this list.

  • $INGEST_TOKEN — the ingest token for the repository on the cluster you will use to monitor your LogScale cluster.
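For reference, a filled-in sink might look like the following sketch. The hostname and token are illustrative only; substitute the values for your own monitoring cluster:

yaml
sinks:
  humio:
    type: humio
    # Illustrative values only - replace with your own ingest token and cluster URL
    token: 0aab0000-0a00-0000-00aa-0a000aa0a0aa
    url: https://logscale-monitor.example.com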

Vector

Vector is a lightweight agent that may be used to send logs. It has built-in support for shipping logs to LogScale via the humio_logs sink.

To use Vector, you'll have to install it on all LogScale nodes within the cluster you want to monitor. See the Vector documentation for how to install Vector.

After you've finished installing it, edit the vector.toml configuration file to look like the following:

toml
[sources.logs]
type = "file"
include = ["$LOGSCALE_LOGS_DIR/humio*.log"]

[sources.logs.multiline]
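# A new event starts at each line matching condition_pattern (a leading timestamp);
# intervening lines are merged into the current event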
start_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
mode = "halt_before"
condition_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
timeout_ms = 2000

# Humio Sink: https://vector.dev/docs/reference/sinks/humio_logs/
[sinks.humio_cluster]
type = "humio_logs"
inputs = ["logs"]
compression = "gzip"
host = "$YOUR_LOGSCALE_URL"
token = "$INGEST_TOKEN"
encoding.codec = "text"

In the above configuration example, you'll need to replace the following placeholders:

  • $LOGSCALE_LOGS_DIR — the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, humio*.log will capture all files in the LogScale log directory that begin with humio and end with .log.

  • $YOUR_LOGSCALE_URL — the URL of your LogScale cluster that will be used for monitoring. See LogScale URLs & Endpoints.

  • $INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your LogScale cluster.

Once you've made those changes to the configuration file, start Vector and then check the repository for LogScale internal logs.

Filebeat

You can also send LogScale internal logs via Filebeat. To do so, install Filebeat on all LogScale nodes within the cluster you are going to monitor.

After you've done so, edit your filebeat.yml configuration file to look like the example below:

yaml
filebeat.inputs:
- paths:
  - $LOGSCALE_LOGS_DIR/humio-*.log
  # Lines that do not begin with a date are appended to the previous event,
  # keeping multi-line entries such as stack traces together
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

queue.mem:
  events: 8000
  flush.min_events: 1000
  flush.timeout: 1s

output:
  elasticsearch:
    hosts: ["$YOUR_LOGSCALE_URL"]
    password: "$INGEST_TOKEN"
    compression_level: 5
    bulk_max_size: 200

In the above configuration example, you'll need to replace the following placeholders:

  • $LOGSCALE_LOGS_DIR — the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, humio-*.log will capture all files in the LogScale log directory that begin with humio- and end with .log.

  • $YOUR_LOGSCALE_URL — the URL of your LogScale cluster being used for monitoring.

    For Filebeat, use the Elastic bulk endpoint, for example cloud.humio.com:9200. See LogScale URLs & Endpoints. A sketch of how this fits into the output section follows this list.

  • $INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your LogScale cluster.
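As a sketch, the Elastic bulk endpoint plugs into the hosts field of the Filebeat output section. The hostname here is illustrative; use your own monitoring cluster's endpoint:

yaml
output:
  elasticsearch:
    # Illustrative endpoint - use your monitoring cluster's Elastic bulk endpoint
    hosts: ["https://cloud.humio.com:9200"]
    password: "$INGEST_TOKEN"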

Once you've made those changes to the configuration file, start Filebeat and then check the repository to see if LogScale internal logs are being received.

Send LogScale logs to LogScale Cloud

Warning

The humio-debug.log file can contain sensitive information, including the email addresses of your LogScale users, queries, the names of repositories, views, and parsers, and IP addresses and access logs from your LogScale nodes. It does not contain any of your ingested events. Be aware of this before shipping this log file into LogScale Cloud.

Continuous streaming of LogScale logs to LogScale Cloud

To assist in troubleshooting your self-hosted instance of LogScale, LogScale recommends you stream your LogScale logs into a LogScale Cloud instance. This allows LogScale Support to immediately investigate your cluster logs when you submit any issues to our helpdesk, expediting the investigation. If you did not set this up during onboarding, contact LogScale Support, and they will get an instance configured for you.

Important

When sending self-hosted logs to LogScale Cloud, only Support is able to assist you. By sending logs, you are granting permission for CrowdStrike employees to access this data, but only to assist in troubleshooting an explicit issue. CrowdStrike will not monitor this data without cause, and will only access it in connection with troubleshooting.

These logs will only be accessible by the Support team, and do not replace your own cluster monitoring practices.

Configure continuous streaming of LogScale logs

Support will set up a repository for you to ship to, and share an ingest token and URL for you to use when shipping logs. You will then need to configure a log shipper following the instructions above.

The $YOUR_LOGSCALE_URL value will depend on your region. Support will provide you with the URL to use. It will generally be either https://cloud.humio.com:443/api/v1/ingest/elastic-bulk or https://cloud.us.humio.com/api/v1/ingest/elastic-bulk.
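With Filebeat, for example, this URL replaces the hosts value in the output section of filebeat.yml. A minimal sketch, assuming the first of the two URLs above:

yaml
output:
  elasticsearch:
    # Use the full elastic-bulk ingest URL provided by Support for your region
    hosts: ["https://cloud.humio.com:443/api/v1/ingest/elastic-bulk"]
    password: "$INGEST_TOKEN"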

Log retention

By default, Support sets a 30-day retention limit on the repository to provide an adequate amount of time to assist and troubleshoot. After 30 days, the data will be removed following the platform's removal process.

You can request that a shorter retention period be set on your data, if desired. Note that this will limit the timeframe within which Support can investigate any issues.

One-time shipping of LogScale logs to LogScale Cloud

If you are not able to ship logs to LogScale Cloud continuously, Support will need to receive logs from your system to troubleshoot any issues you raise in a Support case.

To receive these logs, Support will set up a repository and provide you with the necessary details to ship logs into it. Following the log shipper configuration instructions above, point the shipper's file input at the static file containing the excerpted logs, as in the sketch below.
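For example, with the Falcon LogScale Collector, a minimal source for a one-time upload might look like the following sketch; the file path is hypothetical and should point at your excerpted log file:

yaml
sources:
  exported_logs:
    type: file
    sink: humio
    include:
      # Hypothetical path - replace with the location of your excerpted logs
      - /var/tmp/humio-debug-excerpt.log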

If you are sending data that is more than 30 days old, make sure Support is aware so that the data will be retained on ingest.

Note

If the logs needed to troubleshoot your issue are less than 10 GB, Support can instead provide you with a secure file upload on CrowdStrike's file service (currently box.com). Support will then ingest these logs into a repository built for this purpose, and retain them no longer than 30 days from ingestion.