Ship Internal Logs to Another Cluster
When using a LogScale cluster in production, CrowdStrike highly recommends sending internal logs to another cluster. This way, if you experience any problems with your production cluster, you are still able to determine what went wrong. This guide explains how to ship internal logs to another cluster.
You can use the humio/insights package to monitor a cluster. It comes with dashboards and saved queries that can be useful when debugging problems with LogScale.
Preparation
Assuming you have another cluster ready to receive your production cluster's logs, perform the following steps:
Create a repository on your monitoring cluster. This will be where you ship internal logs.
Identify the URL of where you are sending the logs. Depending on the method and log shipper that you choose, the exact URL may be different. See URLs and Endpoints for more details on the endpoints you can use.
Install the humio/insights package on the newly created repository. This includes all of the dashboards, queries and parsers used to monitor the other cluster.
Now create an ingest token and connect it to the parser named humio. This parser is included in the humio/insights package once it is installed.
Open the appropriate ports on your firewall and hosts to allow communication with the remote cluster. For more information on the URL to use, see the notes below and the URLs and Endpoints page. In general, this will be:
Port 443 when using Vector or the Falcon LogScale Collector
Port 9200 when using Filebeat or a log shipper that makes use of the Elastic bulk ingest endpoint
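With the ports open, you can optionally confirm that the monitoring repository accepts events before configuring a shipper. The sketch below sends a single test event over HTTPS to the standard HEC-style ingest endpoint; it assumes the ingest token created above, and the hostname is a placeholder you must replace with your monitoring cluster's URL:

# Replace the hostname and token with your own values (placeholders shown).
curl -sS "https://$YOUR_LOGSCALE_URL/api/v1/ingest/hec" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"event": "connectivity test"}'

A successful response confirms the token, firewall rules, and endpoint before you move on.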
At this point, your system is prepared. Next, you will configure a log shipper to send logs.
Configure a log shipper
A few steps are necessary to configure a log shipper to send logs to another system.
CrowdStrike recommends shipping logs using the Falcon LogScale Collector. To install it, see the Falcon LogScale Collector documentation.
After you have it installed, you'll need to edit your configuration file to look like this:
sources:
  file:
    type: file
    sink: humio
    include:
      - $LOGSCALE_LOGS_DIR/*.log
    exclude:
      # humio-audit.log is included in humio-debug.log
      - $LOGSCALE_LOGS_DIR/humio-audit.log
    multiLineBeginsWith: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
sinks:
  humio:
    type: humio
    token: $INGEST_TOKEN
    url: $YOUR_LOGSCALE_URL
In the above configuration you need to replace the following:
$LOGSCALE_LOGS_DIR — the path to the directory containing the internal logs. Note that globbing is used to specify which files to collect: in this example, *.log will capture all files in the log directory with the .log extension.
$YOUR_LOGSCALE_URL — the URL of your cluster being used for monitoring. You do not need to specify the full path, but you will need to use the full hostname and port as appropriate. See URLs and Endpoints.
$INGEST_TOKEN — the ingest token for the repository on the cluster used to monitor your cluster.
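As a concrete illustration, here is the same configuration with the placeholders replaced by hypothetical values (a log directory of /var/log/humio and a monitoring cluster at logs.example.com; your paths, token, and URL will differ):

sources:
  file:
    type: file
    sink: humio
    include:
      # Hypothetical log directory; substitute your own.
      - /var/log/humio/*.log
    exclude:
      # humio-audit.log is included in humio-debug.log
      - /var/log/humio/humio-audit.log
    multiLineBeginsWith: ^[0-9]{4}-[0-9]{2}-[0-9]{2}
sinks:
  humio:
    type: humio
    # Example token only; use the ingest token from your monitoring repository.
    token: 0f6e1a2b-0000-0000-0000-example00000
    url: https://logs.example.com

After saving the configuration, restart the collector service so the changes take effect.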
Vector
Vector is a lightweight agent that may be used to send logs. It has built-in support for shipping logs to LogScale via the humio_logs sink.
To use Vector, you'll have to install it on all nodes within the cluster you want to monitor. See the Vector documentation on how to install Vector.
After you've finished installing it, edit the vector.toml configuration file to look like the following:
[sources.logs]
type = "file"
include = ["$LOGSCALE_LOGS_DIR/humio*.log"]
[sources.logs.multiline]
start_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
mode = "halt_before"
condition_pattern = "^[0-9]{4}-[0-9]{2}-[0-9]{2}"
timeout_ms = 2000
# Humio Sink: https://vector.dev/docs/reference/sinks/humio_logs/
[sinks.humio_cluster]
type = "humio_logs"
inputs = ["logs"]
compression = "gzip"
host = "$YOUR_LOGSCALE_URL"
token = "$INGEST_TOKEN"
encoding.codec = "text"
In the above configuration example, you'll need to replace the following placeholders:
$LOGSCALE_LOGS_DIR — the path to the directory containing the internal logs. Note that globbing is used to specify which files to collect: in this example, humio*.log will capture all files in the log directory whose names begin with humio and end with .log.
$YOUR_LOGSCALE_URL — the URL of your cluster that will be used for monitoring. See URLs and Endpoints.
$INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your cluster.
Once you've made those changes to the configuration file, start Vector and then check the repository for internal logs.
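A quick way to do that, assuming a package installation managed by systemd and the default configuration path (both assumptions; adjust to your environment):

# Check the configuration for errors before starting:
vector validate /etc/vector/vector.toml

# Start (or restart) Vector, for example via systemd:
sudo systemctl restart vector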
Filebeat
You can also send internal logs via Filebeat. To do so, install Filebeat on all nodes within the cluster you are going to monitor.
After you've done so, edit your filebeat.yml configuration file to look like the example below:
filebeat.inputs:
- paths:
    - $LOGSCALE_LOGS_DIR/humio-*.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

queue.mem:
  events: 8000
  flush.min_events: 1000
  flush.timeout: 1s

output:
  elasticsearch:
    hosts: ["$YOUR_LOGSCALE_URL"]
    password: "$INGEST_TOKEN"
    compression_level: 5
    bulk_max_size: 200
In the above configuration example, you'll need to replace the following placeholders:
$LOGSCALE_LOGS_DIR — the path to the directory containing the internal logs. Note that globbing is used to specify which files to collect: in this example, humio-*.log will capture all files in the log directory whose names begin with humio- and end with .log.
$YOUR_LOGSCALE_URL — the URL of your cluster being used for monitoring. For Filebeat, use the Elastic bulk endpoint, for example cloud.humio.com:9200. See URLs and Endpoints.
$INGEST_TOKEN — the ingest token from the repository on the cluster you'll use to monitor your cluster.
Once you've made those changes to the configuration file, start Filebeat and then check the repository to see if internal logs are being received.
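Filebeat ships with built-in checks that can help here, assuming the default configuration path (adjust as needed):

# Verify the configuration syntax:
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify connectivity to the Elastic bulk endpoint:
filebeat test output -c /etc/filebeat/filebeat.yml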
Send logs to LogScale Cloud
Sending your logs to LogScale Cloud can be an advantage, allowing Support to resolve issues more quickly. There's no additional cost or subscription necessary to send your logs this way.
Warning
The humio-debug.log file can contain sensitive information. It contains logs of things like: emails of your users, queries, names of repositories, views and parsers, IP addresses, and access logs from your nodes. It does not log any of your ingested events. Be aware of this before shipping this log file into LogScale Cloud.
Continuous streaming of logs to LogScale Cloud
To assist in troubleshooting your self-hosted instance of LogScale, CrowdStrike recommends you stream your logs into a LogScale Cloud instance. This allows Support to immediately investigate your cluster logs when you submit any issues to our helpdesk, expediting the investigation. If you did not set this up during onboarding, contact Support, and they will get an instance configured for you.
Important
When sending self-hosted logs to LogScale Cloud, only Support is able to assist you. By sending logs, you are granting permission for CrowdStrike employees to access this data, but only to assist in troubleshooting an explicit issue. CrowdStrike will not monitor this data without cause, and will only access it in relation to troubleshooting.
These logs will only be accessible by the Support team, and do not replace your own cluster monitoring practices.
Configure continuous streaming of logs
Support will set up a repository for you to ship to, and share an ingest token and URL for you to use when shipping logs. You will then need to configure a shipper following the instructions above.
The $YOUR_LOGSCALE_URL value will depend on your region. Support will provide you the URL to use. It will generally be either https://cloud.humio.com:443/api/v1/ingest/elastic-bulk or https://cloud.us.humio.com/api/v1/ingest/elastic-bulk.
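For example, with the Filebeat configuration shown earlier, the output section would point at the URL Support provides. The US cloud endpoint is used here purely as an illustration; use the exact URL you are given:

output:
  elasticsearch:
    # Illustrative endpoint; substitute the URL provided by Support.
    hosts: ["https://cloud.us.humio.com/api/v1/ingest/elastic-bulk"]
    password: "$INGEST_TOKEN"
    compression_level: 5
    bulk_max_size: 200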
Log retention
By default, Support sets a 30 day time limit on the repository to provide adequate amount of time for us to assist and troubleshoot. After 30 days, the data will be removed following the platform's removal process.
You can request a shorter retention period set on your data, if desired. This will limit the timeframe that Support can investigate any issues.
One-time shipping of logs to LogScale Cloud
If you are not able to ship logs to LogScale Cloud continuously, Support will need to receive logs from your system to troubleshoot any issues you raise in a Support case.
To receive these logs, Support will set up a repository and provide you the necessary details to ship logs into the repository. Following the log shipper configuration instructions above, point the file input at the static file containing the excerpted logs.
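As a sketch, using the Filebeat example from earlier, the input section would list the static file directly instead of a glob (the path shown is hypothetical):

filebeat.inputs:
- paths:
    # Hypothetical location of the excerpted log file.
    - /tmp/log-excerpt/humio-debug.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after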
If you are sending data that is more than 30 days old, make sure Support is aware so that the data will be retained on ingest.
Note
If the logs needed to troubleshoot your issue are less than 10 GB, Support can instead provide you a secure file upload on CrowdStrike's file service (currently box.com). Support will then ingest these logs into a repository built for this purpose, and retain them no longer than 30 days from ingestion.