Log LogScale to LogScale

When using a LogScale cluster in production, we highly recommend sending LogScale internal logs to another cluster. This way, if you experience any problems with your production cluster, you're still able to determine what went wrong. This guide explains how to ship LogScale internal logging to another LogScale cluster.

You can use the Insights package to monitor a LogScale cluster. It comes with dashboards and saved queries that can be useful when debugging problems with LogScale.

Preparation

Assuming you have another LogScale cluster ready to receive your production LogScale cluster's logs, you'll need to complete the following steps:

  • First, create a repository on your LogScale monitoring cluster. This is where you'll ship LogScale internal logs.

  • Identify the URL of where you are sending the logs. Depending on the method and log shipper that you choose, the exact URL may be different. See LogScale URLs & Endpoints for more details on the endpoints you can use.

  • Next, install the Insights package on the newly created repository. This includes all of the dashboards, queries, and parsers used to monitor the other LogScale cluster.

  • Now create an ingest token and assign it the parser named humio. This parser is part of the humio/insights package once it is installed.

  • Open the appropriate ports on your firewall and hosts to allow communication with the remote LogScale cluster. For more information on the URL to use, see the notes below and the LogScale URLs & Endpoints page. In general, this will be:

    • Port 443 when using Vector or the Falcon LogScale Collector

    • Port 9200 when using Filebeat or another log shipper that makes use of the Elastic bulk ingest endpoint

At this point, your system is prepared. Next, you'll configure a log shipper to send LogScale logs; this is covered in the next section.
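Before moving on, you may want to confirm that the new ingest token and repository accept events. The following is a minimal sketch, assuming the standard unstructured ingest endpoint (/api/v1/ingest/humio-unstructured) and the Python requests library; the hostname and token are placeholders:

python
import requests

# Hypothetical values -- replace with your monitoring cluster's URL and the
# ingest token created for the monitoring repository above.
LOGSCALE_URL = "https://logscale-monitoring.example.com"
INGEST_TOKEN = "REPLACE_WITH_INGEST_TOKEN"

# Send a single test event to the unstructured ingest endpoint.
response = requests.post(
    f"{LOGSCALE_URL}/api/v1/ingest/humio-unstructured",
    headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
    json=[{"messages": ["2024-01-01T00:00:00.000+0000 [test] INFO ingest token test event"]}],
    timeout=10,
)

# A 200 response means the token and endpoint work; anything else needs investigation.
print(response.status_code, response.text)

If the request succeeds, the test event should appear in the monitoring repository shortly afterwards.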

Configure a Log Shipper

The following sections describe the steps necessary to configure a log shipper to send LogScale logs to another LogScale system:

LogScale Collector

We recommend shipping LogScale logs using the LogScale Collector. To install it, see the Falcon LogScale Collector documentation.

After you have it installed, you'll need to edit your configuration file to look like this:

yaml
sources:
  file:
    type: file
    sink: humio
    include:
      - $LOGSCALE_LOGS_DIR/*.log
    exclude:
      # humio-audit.log is included in humio-debug.log
      - $LOGSCALE_LOGS_DIR/humio-audit.log
    multiLineBeginsWith: ^[0-9]{4}-[0-9]{2}-[0-9]{2}

sinks:
  humio:
    type: humio
    token: $INGEST_TOKEN
    url: $YOUR_LOGSCALE_URL

In the above configuration you need to replace the following:

  • $LOGSCALE_LOGS_DIR — the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, *.log captures all files in the LogScale log directory with the extension .log.

  • $YOUR_LOGSCALE_URL — the URL of the LogScale cluster being used for monitoring. You do not need to specify the full path, but you will need to use the full hostname and port as appropriate. See LogScale URLs & Endpoints.

  • $INGEST_TOKEN — the ingest token for the repository on the cluster you are using to monitor your LogScale cluster.

Vector

Vector is a lightweight agent that may be used to send logs. It has built-in support for shipping logs to LogScale via the humio_logs sink.

To use Vector, you'll have to install it on all LogScale nodes within the cluster you want to monitor. See the Vector documentation on how to install Vector.

After you've finished installing it, edit the vector.toml configuration file to look like the following:

toml
[sources.logs]
type = "file"
include = ["$LOGSCALE_LOGS_DIR/humio*.log"]

[sources.logs.multiline]
start_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
mode = "halt_before"
condition_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
timeout_ms = 2000

# Humio Sink: https://vector.dev/docs/reference/sinks/humio_logs/
[sinks.humio_cluster]
type = "humio_logs"
inputs = ["logs"]
compression = "gzip"
host = "$YOUR_LOGSCALE_URL"
token = "$INGEST_TOKEN"
encoding.codec = "text"

In the above configuration example, you'll need to replace the following placeholders:

  • $LOGSCALE_LOGS_DIR — needs to be replaced with the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, humio*.log captures all files in the LogScale log directory that begin with humio and have the extension .log.

  • $YOUR_LOGSCALE_URL — should be replaced with the URL of the LogScale cluster that will be used for monitoring. See LogScale URLs & Endpoints.

  • $INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your LogScale cluster.

Once you've made those changes to the configuration file, start Vector and then check the repository for LogScale internal logs.
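Besides checking in the UI, you can query the monitoring repository programmatically. The following is a minimal sketch, assuming the synchronous search endpoint (/api/v1/repositories/<repo>/query) and a personal API token with access to the repository; the hostname, repository name, and token are placeholders:

python
import requests

# Hypothetical values -- replace with your monitoring cluster, repository, and API token.
LOGSCALE_URL = "https://logscale-monitoring.example.com"
REPOSITORY = "logscale-internal-logs"
API_TOKEN = "REPLACE_WITH_PERSONAL_API_TOKEN"

# Count the events received in the repository over the last 15 minutes.
response = requests.post(
    f"{LOGSCALE_URL}/api/v1/repositories/{REPOSITORY}/query",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Accept": "application/json",
    },
    json={"queryString": "count()", "start": "15m", "end": "now"},
    timeout=30,
)

# A non-zero count indicates that internal logs are arriving from Vector.
print(response.status_code, response.text)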

Filebeat

You can instead send LogScale internal logs via Filebeat. To do so, install Filebeat on all LogScale nodes within the cluster you want to monitor.

After you've done so, edit your filebeat.yml configuration file to look like the example below:

yaml
filebeat.inputs:
- paths:
  - $LOGSCALE_LOGS_DIR/humio-*.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

queue.mem:
  events: 8000
  flush.min_events: 1000
  flush.timeout: 1s

output:
  elasticsearch:
    hosts: ["$YOUR_LOGSCALE_URL"]
    password: "$INGEST_TOKEN"
    compression_level: 5
    bulk_max_size: 200

In the above configuration example, you'll need to replace the following placeholders:

  • $LOGSCALE_LOGS_DIR — the path to the directory containing LogScale internal logs.

    Note

    Globbing has been used to specify which files to collect. In this example, humio-*.log captures all files in the LogScale log directory that begin with humio- and have the extension .log.

  • $YOUR_LOGSCALE_URL — the URL of your LogScale cluster being used for monitoring.

    For Filebeat, use the Elastic Bulk Endpoint, for example cloud.humio.com:9200. See LogScale URLs & Endpoints.

  • $INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your LogScale cluster.

Once you've made those changes to the configuration file, start Filebeat and then check the repository to see if LogScale internal logs are being received.
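If no events show up, a quick way to rule out network problems is to check that the Elastic bulk endpoint port is reachable from the LogScale node. The following is a minimal sketch; the hostname is a placeholder, and the port assumes the Elastic bulk ingest endpoint on 9200 described above:

python
import socket

# Hypothetical hostname -- replace with your monitoring cluster.
HOST = "logscale-monitoring.example.com"
PORT = 9200  # Elastic bulk ingest endpoint used by Filebeat

try:
    # Attempt a plain TCP connection to rule out firewall or DNS problems.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP connection to {HOST}:{PORT} succeeded")
except OSError as err:
    print(f"TCP connection to {HOST}:{PORT} failed: {err}")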

Send LogScale Logs to LogScale Cloud

To assist in monitoring your on-premise instance of LogScale, it's possible to ship LogScale logs into LogScale Cloud. This is convenient because you won't have to run and maintain another cluster, and it also makes it easier to share your internal logs with LogScale Support.

Before shipping logs into LogScale Cloud, you should already be in touch with LogScale Support. If Support agrees to receive your logs in LogScale Cloud, the following is what you need to set up.

Important

Sending on-premise logs to LogScale Cloud is only intended to enable CrowdStrike Support to assist you. By sending logs, you're granting permission for LogScale employees to access this data, but only to assist in troubleshooting an explicit issue. We won't monitor this data without cause, and we'll only access it in relation to troubleshooting.

Prerequisites

There are a few things you'll need:

  • a LogScale Cloud account

  • The URL of the LogScale Cloud instance where you want to send the logs. LogScale Support will instruct you on which cloud environment to use. Typically this will be either https://cloud.humio.com:443/api/v1/ingest/elastic-bulk or https://cloud.us.humio.com/api/v1/ingest/elastic-bulk.

  • Open up any ports that are required to send the data. The data is sent encrypted over port 443, which must be open from your environment to LogScale Cloud for data to be transferred.

  • a repository, preferably named using the format onprem_$orgName_debug

  • the humio/insights package installed on your repository

For example, below is a Falcon LogScale Collector configuration for shipping logs directly to LogScale Cloud:

yaml
sources:
  file:
    type: file
    sink: humio
    include:
      - ${HUMIO_LOGS_DIR}/*.log
    exclude:
      # humio-audit.log is included in humio-debug.log
      - ${HUMIO_LOGS_DIR}/humio-audit.log
    multiLineBeginsWith: ^[0-9]{4}-[0-9]{2}-[0-9]{2}

sinks:
  humio:
    type: humio
    token: $INGEST_TOKEN
    url: https://cloud.humio.com:443/api/v1/ingest/elastic-bulk

Contact support if you need a repository created.

Configure Log Shippers

You just need to ensure that $YOUR_LOGSCALE_URL is set to https://cloud.humio.com for EU Cloud or https://cloud.us.humio.com for US Cloud, depending on where your LogScale Cloud account is.
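If events do not arrive in the cloud repository, one quick check is whether outbound HTTPS traffic on port 443 is allowed from your LogScale nodes. The following is a minimal sketch, assuming the cluster exposes the unauthenticated /api/v1/status health endpoint and using the EU Cloud URL as a placeholder:

python
import requests

# Use https://cloud.us.humio.com instead for US Cloud accounts.
CLOUD_URL = "https://cloud.humio.com"

# A successful response confirms that port 443 is open from this host
# to LogScale Cloud and that TLS connections can be established.
response = requests.get(f"{CLOUD_URL}/api/v1/status", timeout=10)
print(response.status_code, response.text)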

Warning

The humio-debug.log file can contain sensitive information. It contains logs of things such as email addresses of your LogScale users, queries, names of repositories, views, and parsers, and IP addresses and access logs from your LogScale nodes. It does not log any of your ingested events. Please ensure you are aware of this before shipping this log file into LogScale Cloud.

Remove Debug Logs

By default, Support sets a 30-day time limit on the repository to provide an adequate amount of time for us to assist and troubleshoot. After 30 days, the data is included in the removal process.

You can request to have these logs removed before the default 30 days if Support troubleshooting is no longer needed. In both cases, you must stop ingest from the log forwarder in order to remove all data.