FluentD

FluentD is an open source data collector which can be used to collect and ship data to Humio.

Installation

Download and install FluentD following the FluentD Downloads Page installation guides.

Splunk Output Plugin (Recommended)

Install the Splunk Output Plugin.

out_splunk_hec Installation

Run the following command:

fluent-gem install fluent-plugin-splunk-enterprise

Configuration

This section documents only the keys and values required to ship data to Humio; it does not cover all of the configuration options available in FluentD.

For more information on FluentD see their Quick Start Guide.

Editing the Configuration
  1. Open the td-agent.conf file, located at /etc/td-agent/td-agent.conf.

  2. Specify the following in the source section:

    • the type of input plugin in the field @type.

    • the path of the logs to collect in the field path.

    • the path of the file in which Fluentd will record the position it has read to in the field pos_file.

    • the tag to apply to the data in the field tag.

    • the type of parser in the field @type under parse.

  3. Specify the tag to filter on, for example var.log, then set the following in the filter section to create an event processing pipeline that transforms data before it is sent to Humio:

    • set the field @type to record_transformer.

    • set the field hostname under record to "#{Socket.gethostname}".

  4. Specify the tag to match on and set the following in the match section:

    • the type of output plugin in the field @type.

    • the hostname of your Humio account in the field host.

    • the port of your Humio installation in the field port.

    • your Humio ingest token in the field token.

    • set use_ssl to true to enable SSL.

  5. Specify the following in the buffer section:

    • set flush_mode to interval.

    • set flush_interval to how often the flush should be performed.

    • set flush_thread_count to the number of threads used to flush the buffer.

    • set overflow_action to block.

    • set retry_forever to true.

Configuration Example

Below is an example of how you might configure FluentD to monitor all files in a directory, add metadata, and configure the output plugin:

<source>
  @type tail
  path /var/log/**
  pos_file /var/fluentd/data.pos
  tag var.log
  path_key filename
  <parse>
    @type none
  </parse>
</source>

<filter var.log>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

<match var.log>
  @type                 splunk_hec
  host                  humio.example.com
  port                  443
  token                 ${MyIngestToken}
  use_ssl               true
  <buffer>
    @type               memory
    flush_mode          interval
    flush_interval      2
    flush_thread_count  2
    overflow_action     block
    retry_forever       true
  </buffer>
</match>

Replace ${MyIngestToken} with the value of an ingest token for the repository/parser you wish to send data to.

The buffer settings above are examples of the options that can be set; adjust them to your requirements.
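Rather than writing the ingest token directly into the file, FluentD's embedded Ruby syntax can read it from an environment variable at config load time; a minimal sketch, assuming the token is exported as HUMIO_INGEST_TOKEN (a variable name chosen here for illustration):

```ini
<match var.log>
  @type   splunk_hec
  host    humio.example.com
  port    443
  # "#{...}" is evaluated as embedded Ruby when the configuration is loaded;
  # HUMIO_INGEST_TOKEN is a hypothetical variable name for this sketch.
  token   "#{ENV['HUMIO_INGEST_TOKEN']}"
  use_ssl true
</match>
```

This keeps the token out of the configuration file, which is useful when the file is kept under version control.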

Configuration Objects

source

  • @type the type of input plugin.

  • path the path of the logs to collect.

  • pos_file the path of the file in which Fluentd records the position it has read to.

  • tag the tag to apply to the data.

  • path_key the field in which to record the path of the file each event was read from.

parse

  • @type the type of parser to use.
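The configuration example uses @type none, which forwards each line unparsed. If the monitored files contain structured data, a different parser can be named here; a sketch, assuming the log lines are JSON-formatted:

```ini
<parse>
  # Parse each line as a JSON object so its fields arrive in Humio as
  # structured attributes rather than a single raw string.
  @type json
</parse>
```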

filter

Specify the tag to filter on, for example var.log, then set the following in the filter section to create an event processing pipeline that transforms data before it is sent to Humio:

  • @type set the field to record_transformer.

  • hostname set the field under record to "#{Socket.gethostname}".

match

Specify the tag on which to match:

  • @type the type of output plugin.

  • host the hostname of your Humio account.

  • port the port of your Humio installation.

  • token your Humio ingest token.

  • use_ssl set to true to enable SSL.

buffer

Specify the following in the buffer section:

  • flush_mode set to interval.

  • flush_interval specifies how often the flush should be performed.

  • flush_thread_count the number of threads used to flush the buffer.

  • overflow_action set to block.

  • retry_forever set to true.
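The memory buffer shown in the configuration example is lost if td-agent stops before flushing. For durability, the buffer can be backed by files instead; a sketch, with /var/fluentd/buffer as an assumed spool directory:

```ini
<buffer>
  # File-backed buffer survives a td-agent restart, unlike @type memory.
  @type               file
  # Assumed path for this sketch; it must be writable by the td-agent user.
  path                /var/fluentd/buffer
  flush_mode          interval
  flush_interval      2
  flush_thread_count  2
  overflow_action     block
  retry_forever       true
</buffer>
```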

Elastic Output Plugin (Deprecated)

Warning

Due to changes made by Elastic to the Elasticsearch open source libraries, they are no longer compatible with endpoints other than Elastic's own Elasticsearch. Even installing an older version of the FluentD Elasticsearch plugin will not work, as it typically builds with the latest versions of the dependencies.

You'll have to configure the Elasticsearch Output Plugin. Below is an example of how you might configure the output plugin:

<match **>
  @type           elasticsearch
  host            humio.example.com
  port            9200
  scheme          https
  ssl_version     TLSv1_2
  user            ${MyRepoName}
  password        ${MyIngestToken}
  logstash_format true
</match>

In the example here, host is the hostname of your Humio instance, and port is where Humio exposes the Elastic endpoint. Don't forget to enable the endpoint by setting the ELASTIC_PORT variable on the Humio server. Replace MyRepoName with your Humio repository name and MyIngestToken with your ingest token.

Depending on whether TLS is enabled on host:port, scheme should be set to either https or http. In some cases it is necessary to specify the SSL version, so set ssl_version as shown here. The user should be the repository name, and the password should be the ingest token.
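Enabling the Elastic endpoint on the Humio side is a server configuration change; a sketch, assuming Humio's settings are supplied through an environment file (the exact file location and mechanism depend on how Humio is installed):

```ini
# In Humio's server environment configuration; 9200 matches the port
# used in the match block above.
ELASTIC_PORT=9200
```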