After installing LogScale on a server, you'll want to set up a system that feeds data into LogScale automatically; this loading of information is known as ingesting data. Configuring data ingestion is an essential step when setting up LogScale.
The following diagram provides an overview of the configuration flow to ingest data using LogScale.
Figure 61. Process graph
You can use different methods to ingest data depending on your requirements, such as the operating system and log format.
Ingesting FDR Data
LogScale can ingest Falcon Data Replicator (FDR) data without requiring you to configure log shippers. See Ingesting FDR Data for more information.
Ingesting AWS S3 Bucket Data
LogScale can also ingest and parse AWS S3 bucket data. See Ingesting Data from AWS S3 for more information.
S3 ingest was introduced in v1.120.0.
LogScale can ingest data from a wide range of log shippers. Log shippers use the Ingest API to send one or more logs to LogScale. A log shipper can handle multiple logs and multiple log types, manage log storage on disk, and pre-process logs before sending them to LogScale. Log shippers are covered in more detail in Log Shippers.
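As an illustration, a log shipper such as Filebeat can send data to LogScale's Elasticsearch-compatible bulk ingest endpoint. The following is a minimal sketch, not a definitive configuration: the host, file paths, and token are placeholders, and the exact options depend on your Filebeat and LogScale versions.

```yaml
# Hypothetical Filebeat configuration; replace all values with your own.
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/myapp/*.log

output.elasticsearch:
  # LogScale exposes an Elasticsearch-compatible bulk ingest endpoint.
  hosts: ["https://your-logscale-host/api/v1/ingest/elastic-bulk"]
  # The repository's ingest token goes in the password field;
  # the username is required by Filebeat but ignored by LogScale.
  username: anything
  password: "${INGEST_TOKEN}"
```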
LogScale Ingest Tokens
A repository can have one or more ingest tokens associated with it. Ingest tokens are used with the Ingest API to enable data to be routed to the right repository, and to associate a parser. See Ingest Tokens for more information.
The Ingest API can be used directly or through one of LogScale's software libraries. See the Ingest API reference page for more information. For a list of supported software, see Software Libraries in the Appendix.
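For example, sending a single structured event directly with curl might look like the following sketch. The URL, token, tags, and attribute fields are placeholder assumptions; substitute your own values.

```shell
# Hypothetical values: replace with your LogScale URL and ingest token.
LOGSCALE_URL="https://cloud.humio.com"
INGEST_TOKEN="your-ingest-token"

# Structured ingest: events are grouped under optional tags, and each
# event carries its own timestamp and attributes.
PAYLOAD='[
  {
    "tags": { "host": "web-01" },
    "events": [
      {
        "timestamp": "2024-01-15T12:00:00+00:00",
        "attributes": { "status": 200, "path": "/index.html" }
      }
    ]
  }
]'

# Send the batch; the ingest token routes it to the right repository.
curl -s --max-time 10 "$LOGSCALE_URL/api/v1/ingest/humio-structured" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed (check URL and token)"
```

The ingest token in the `Authorization` header is what ties the request to a repository and its assigned parser, so no repository name appears in the URL.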
Parsing Data
Parsing ingested data enables the information to be tagged, specific fields and elements of the log data to be extracted, and an additional level of detail to be captured. Using a parser also lets you configure the type of the data and the fields extracted, supporting metrics, graphs, and dashboards. See Parsing Data for more information.
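As a sketch only, a parser for JSON application logs written in LogScale's query language might extract the JSON fields and assign the event timestamp from one of them. The field name `ts` and the timestamp format here are assumptions, not part of any shipped parser.

```
// Hypothetical parser sketch; "ts" and its format are assumptions.
parseJson()
| parseTimestamp(format="yyyy-MM-dd'T'HH:mm:ssXXX", field=ts)
```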
In most cases you will want to use a log shipper or one of our platform integrations. If you are interested in getting some data into LogScale quickly, see the Use Case: Ingesting Application Logs tutorial page.
LogScale is optimized for live streaming of events in real time. If you ship data that is not live, you need to observe some basic rules so that the resulting events are stored in LogScale as efficiently as if they had been received live. See Backfilling Data.
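Among those rules: give each historical event an explicit timestamp and ship events in chronological order. A minimal sketch, assuming a newline-delimited JSON archive and the structured ingest endpoint (file names, URL, and token are all placeholders):

```shell
# Hypothetical values: replace with your LogScale URL and ingest token.
LOGSCALE_URL="https://cloud.humio.com"
INGEST_TOKEN="your-ingest-token"

# Sample archive: one JSON event per line, not in chronological order.
printf '%s\n' \
  '{"timestamp":"2024-01-01T00:00:05+00:00","message":"b"}' \
  '{"timestamp":"2024-01-01T00:00:01+00:00","message":"a"}' \
  > old-events.ndjson

# Sort into chronological order before sending (lexical sort works here
# because the lines share a prefix up to the ISO 8601 timestamp).
sort old-events.ndjson > sorted-events.ndjson

# Wrap the sorted events into a single structured-ingest batch, keeping
# each event's explicit timestamp.
jq -s '[{ "tags": {}, "events":
  [ .[] | { "timestamp": .timestamp, "attributes": . } ] }]' \
  sorted-events.ndjson > batch.json

curl -s --max-time 10 "$LOGSCALE_URL/api/v1/ingest/humio-structured" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  --data @batch.json || echo "request failed (check URL and token)"
```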