Use Case: Ingesting Application Logs

If you're developing your own application and want to send its logs to LogScale, this article may be helpful. For the examples here, we'll use Filebeat to ship logs from your application's log file.

Refer to Elastic's Filebeat documentation for setting up Filebeat on your system.

Warning

Do not use the Elastic non-OSS version of Filebeat; it will not work with LogScale. For Filebeat OSS 7.x and later, you need to be running LogScale 1.5.8 or higher.

Note

If you are using Docker to run your application, start by looking at the Docker Deployment documentation.

Filebeat Configuration

Filebeat sends logs as unstructured text. To parse these logs once they arrive at LogScale, you need to assign a parser. You do this by configuring Filebeat to add an additional field called @type. You set @type to the name of one of LogScale's built-in parsers or one of your own parsers.

For now, assume we can use the built-in kv (Key-Value) parser, which extracts any fields of the form key=value from the log lines. See Parsing Data for more information on parsing log data.
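To see what kv-style extraction does conceptually, here is a rough Python sketch. This is an illustration only, not LogScale's actual parser implementation; the sample log line is made up:

```python
import re

def extract_kv(line):
    # Pull out every key=value token from a log line,
    # mirroring in spirit what the built-in kv parser extracts.
    return dict(re.findall(r'(\w+)=(\S+)', line))

fields = extract_kv('2024-01-01T12:00:00Z level=info user=alice action=login')
# fields now maps 'level' -> 'info', 'user' -> 'alice', 'action' -> 'login'
```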

Example Filebeat configuration with a custom log type:

```yaml
filebeat.inputs:
- paths:
    - $PATH_TO_YOUR_APPLICATION_LOG
  fields:
    "@type": $PARSER_NAME
output.elasticsearch:
  hosts: ["https://$YOUR_LOGSCALE_URL/api/v1/ingest/elastic-bulk"]
  username: $INGEST_TOKEN
```
  • $YOUR_LOGSCALE_URL — address/hostname of your LogScale server.

  • $INGEST_TOKEN — an ingest token for your repository (see Ingest Tokens), for example a string such as fS6Kdlb0clqe0UwPcc4slvNFP3Qn1COzG9DEVLw7v0Ii.

  • $PATH_TO_YOUR_APPLICATION_LOG — the file path to the log file you want to send.

  • $PARSER_NAME — the name of a parser: either one of the built-in parsers, such as kv (Key-Value), or one you have created yourself (see Creating a Parser).

See the detailed documentation for Filebeat for more information.

Note

Remember to set a limit on the size of the log file, and rotate it so that you don't run out of disk space.
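If the application manages its own log file, size limits and rotation can be handled in-process. For example, with Python's standard library (the file name, size limit, and logger name here are placeholders):

```python
import logging
from logging.handlers import RotatingFileHandler

# Cap the log at ~10 MB and keep 5 rotated copies so the
# application cannot fill the disk. 'app.log' and 'myapp'
# are placeholder names.
handler = RotatingFileHandler('app.log', maxBytes=10_000_000, backupCount=5)
handler.setFormatter(logging.Formatter('level=%(levelname)s %(message)s'))

logger = logging.getLogger('myapp')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('user=alice action=login')
```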