Sources & Examples

The following sections detail the specific configuration for each source type, along with example configuration files. A description of the fields appears below each example.

yaml
# Define the sink (destination) for the logs
sinks:
  logscale_sink:
    type: logscale  # Using LogScale as the destination
    url: "https://cloud.humio.com/"  # Replace with your LogScale instance URL
    token: "${LOGSCALE_TOKEN}"  # Use environment variable for the ingest token
    # Configure the queue for buffering events
    queue:
      type: memory  # Use a memory-based queue
      maxLimitInMB: 64  # Set the queue size to 64 MB
      # The queue size is reduced to 64 MB because the input is read from
      # persistent files. In case of a shutdown or network issues, the
      # collector can resume reading from where it left off, reducing the
      # need for a large buffer. This helps optimize memory usage while
      # still providing adequate buffering for most scenarios.

# Define the source for Apache access logs
sources:
  apache_access_logs:
    type: file  # File-based source
    include:
      - "/var/log/apache2/access.log"  # Path to Apache access log file
    # You can add multiple log files if needed
    # - "/var/log/apache2/other_access.log"
    
    # Optional: Exclude specific files or patterns
    # exclude:
    #   - "/var/log/apache2/access.log.1"
    #   - "/var/log/apache2/excluded_access.log"

    # Optional: Exclude files with specific extensions
    # excludeExtensions:
    #   - "gz"
    #   - "zip"

    # Configure multiline parsing if needed
    # multiLineBeginsWith: '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'

    # Reference the sink defined above
    sink: logscale_sink

    # Optional: Specify a parser to be used in LogScale
    # parser: "apache_combined"

    # Add static fields to all events from this source (optional)
    # transforms:
    #   - type: static_fields
    #     fields:
    #       log_type: "apache_access"
    #       environment: "${ENV}"  # Use an environment variable

File Source

The file source allows you to ship logs from files using glob patterns, and it also supports gzip- and bzip2-compressed formats. When type is set to file, the following configuration parameters apply:
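As a hedged illustration of glob-based includes (the paths here are examples only, not defaults), a source can mix glob patterns and literal paths:

```yaml
sources:
  apache_access_logs:
    type: file
    include:
      - "/var/log/apache2/*.log"      # glob pattern matching all .log files
      - "/var/log/nginx/access.log"   # literal path
    exclude:
      - "/var/log/apache2/other.log"  # skip one specific file
    sink: logscale_sink
```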

Table: File Source

exclude (string, optional[a])
Specify the file paths to exclude when collecting data. This field supports environment variable expansion. To use an environment variable, reference it using the syntax ${VAR}, where VAR is the name of the variable. The {}-braces may be omitted, however in that case the variable name can only contain: [a-z], [A-Z], [0-9] and "_".

excludeExtensions (string, optional[a])
Specify the file extensions to exclude when collecting data. Some file extensions are automatically ignored even if they match an included pattern: xz, tgz, z, zip, 7z. To include all formats, set excludeExtensions to an empty array. This has the effect that files will not be decompressed before ingest.

inactivityTimeout (integer, optional[a], default: 60)
Specify the period of inactivity, in seconds, for a monitored file before its file descriptor is closed to release system resources. Whenever the file changes, it is re-opened and the timeout restarts.

include (string, optional[a])
Specify the file paths to include when collecting data. This field supports environment variable expansion. To use an environment variable, reference it using the syntax ${VAR}, where VAR is the name of the variable. The {}-braces may be omitted, however in that case the variable name can only contain: [a-z], [A-Z], [0-9] and "_".
multiLineBeginsWith (regex, optional[a])

The file input can join consecutive lines together to create multiline events by using a regular expression. It can be configured with a pattern that matches either the beginning or the continuation of multiline events.

For example, to start a new event at every line beginning with a date, e.g. 2022, you would use:

yaml
multiLineBeginsWith: ^20\d{2}-

In this case, every line that doesn't match the pattern is appended to the latest line that did.
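To make the joining rule concrete, here is a hedged sketch (our own illustration, not the collector's actual implementation) of how a begins-with pattern could fold physical lines into events:

```python
import re

def join_multiline(lines, begins_with):
    """Lines matching the pattern start a new event; others are appended."""
    pattern = re.compile(begins_with)
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)            # starts a new event
        else:
            events[-1] += "\n" + line      # continuation of the last event
    return events

lines = [
    "2022-01-01 ERROR request failed",
    "Traceback (most recent call last):",
    '  File "app.py", line 10',
    "2022-01-02 INFO recovered",
]
events = join_multiline(lines, r"^20\d{2}-")
print(len(events))  # 2 (the error plus its traceback, then the info line)
```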

multiLineContinuesWith (regex, optional[a])

The file input can join consecutive lines together to create multiline events by using a regular expression. It can be configured with a pattern that matches either the beginning or the continuation of multiline events. For example, to treat lines that start with whitespace as continuations of the previous line (instead of starting a new event at column 0):

yaml
multiLineContinuesWith: ^\s+

In this case, every line that matches the pattern is appended to the latest line that didn't.
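The complementary continues-with rule can be sketched the same way (again an illustration under our own assumptions, not the collector's code): here a matching line is appended, rather than starting a new event.

```python
import re

def join_continuations(lines, continues_with):
    """Lines matching the pattern continue the previous event."""
    pattern = re.compile(continues_with)
    events = []
    for line in lines:
        if events and pattern.match(line):
            events[-1] += "\n" + line      # indented continuation
        else:
            events.append(line)            # new event at column 0
    return events

events = join_continuations(
    ["error handling request", "  at frame 1", "  at frame 2", "next event"],
    r"^\s+",
)
print(len(events))  # 2
```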

parser (string, optional[a])
Specify the parser within LogScale to use to parse the logs. If you install the parser through a package, you must specify the type and name as displayed on the parsers page, for example linux/system-logs:linux-filebeat. If a parser is assigned to the ingest token being used, this parser is ignored.

sink (string, optional[a])
Name of the configured sink that the collected events should be sent to.

transforms (string, optional[a])
For more information, see MySourceName.

[a] Optional parameters use their default value unless explicitly set.
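The ${VAR} expansion rule described for include and exclude can be mimicked as follows; this is a hedged sketch of the documented syntax (braces allow any name; without braces the name is limited to [a-z], [A-Z], [0-9] and "_"), not the collector's own code:

```python
import re

# Match ${ANY_NAME} or a bare $NAME restricted to [A-Za-z0-9_].
_VAR = re.compile(r"\$\{([^}]+)\}|\$([A-Za-z0-9_]+)")

def expand(value, env):
    def repl(match):
        name = match.group(1) or match.group(2)
        return env.get(name, match.group(0))  # leave unknown variables as-is
    return _VAR.sub(repl, value)

print(expand("/var/log/${APP}/access.log", {"APP": "apache2"}))
# /var/log/apache2/access.log
print(expand("/var/log/$APP/access.log", {"APP": "apache2"}))
# /var/log/apache2/access.log
```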


See Configuration Elements for information on the common elements in the configuration, for example sinks, and their configuration parameters and details on the structure of the configuration files.

File Rotation Support

The Falcon LogScale Collector strives to support all kinds of file rotation.

  • The Collector fingerprints files larger than 256 bytes and increases the fingerprint block size up to 4096 bytes, as applicable.

  • The Collector supports rotation using the following methods:

    • rename

    • compression

    • truncation

With rename and compression, rotated files are detected as duplicates. Compressed files are considered static. Renamed files keep their fingerprints, and further updates to them are supported. When a file is truncated, the read offset is reset to the new size, which may be zero or non-zero. If a truncation is quickly followed by a write, the resulting read offset depends on the time between the write and the processing of the truncation event.
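The fingerprinting idea behind rename detection can be sketched as follows; the block size matches the 256-byte minimum mentioned above, but the hash choice and structure are our assumptions, not the collector's implementation:

```python
import hashlib

def fingerprint(data, block_size=256):
    """Hash the first block of a file's contents.

    A renamed or appended-to file keeps the same leading bytes, so its
    fingerprint is stable; a file smaller than the block is skipped.
    """
    if len(data) < block_size:
        return None  # too small to fingerprint reliably
    return hashlib.sha256(data[:block_size]).hexdigest()

# Appending to a file (as log rotation by rename then continued writes
# would) leaves the fingerprint unchanged:
print(fingerprint(b"x" * 300) == fingerprint(b"x" * 300 + b" appended"))  # True
```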

Reading Compressed Files

The Falcon LogScale Collector supports reading gzip and bzip2 compressed files.

If gzip or bzip2 compressed files are matched by the configured include patterns, they are auto-detected as gzip/bzip2 files (using the magic number at the beginning of the file), decompressed, and ingested.
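Magic-number detection works on the first few bytes of the file: gzip files begin with 0x1f 0x8b and bzip2 files with b"BZh". The following is an illustrative check only; how the collector itself performs the detection may differ in detail:

```python
import bz2
import gzip

def detect_compression(first_bytes):
    """Identify gzip/bzip2 data from its leading magic bytes."""
    if first_bytes[:2] == b"\x1f\x8b":
        return "gzip"
    if first_bytes[:3] == b"BZh":
        return "bzip2"
    return None

print(detect_compression(gzip.compress(b"log line")))  # gzip
print(detect_compression(bz2.compress(b"log line")))   # bzip2
print(detect_compression(b"plain text"))               # None
```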

By default files with the following extensions will be ignored/skipped even if they match a configured include pattern:

  • .xz

  • .tgz

  • .z

  • .zip

  • .7z

File extensions to ignore/skip can be configured with the excludeExtensions config option. The default is:

yaml
excludeExtensions: ["xz", "tgz", "z", "zip", "7z"]

Setting excludeExtensions to an empty array overrides the default; matched files will then not be decompressed before ingest. For example:

yaml
excludeExtensions: []

This effectively ships files in their compressed format.

If for some reason you want to exclude gzip and bzip2 files in addition to the other excluded file extensions, the following option can be used (provided the compressed files are named *.gz and *.bz2):

yaml
excludeExtensions: ["xz", "tgz", "z", "zip", "7z", "gz", "bz2"]