Archive data
Security Requirements and Controls: Change archiving settings permission
LogScale supports archiving ingested logs to Amazon S3 and Google Cloud Storage. The archived logs are then available for further processing in any external system that integrates with the archiving provider. The files LogScale writes are not searchable by LogScale itself — this is an export meant for other systems to consume.
When archiving is enabled, all existing events in the repository are backfilled to the archiving platform. New events are then archived by a periodic job running inside every LogScale node, which looks for new, unarchived segment files. Each segment file is read from disk, streamed to a bucket on the archiving provider's platform, and marked as archived in LogScale.
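The periodic job described above can be sketched roughly as follows. This is a minimal illustration of the find-upload-mark loop, not LogScale's implementation; all names (`Segment`, `upload_to_bucket`, `run_archiving_pass`) are hypothetical.

```python
# Hypothetical sketch of a periodic archiving pass: find unarchived
# segment files, stream each to a bucket, then mark it archived so
# later passes skip it. Names are illustrative, not LogScale APIs.

class Segment:
    def __init__(self, segment_id, path):
        self.segment_id = segment_id
        self.path = path
        self.archived = False

def find_unarchived_segments(segments):
    """Return segment files not yet marked as archived."""
    return [s for s in segments if not s.archived]

def upload_to_bucket(segment, bucket):
    """Stand-in for streaming the segment file to the provider's bucket."""
    bucket[segment.segment_id] = segment.path

def run_archiving_pass(segments, bucket):
    for segment in find_unarchived_segments(segments):
        upload_to_bucket(segment, bucket)
        segment.archived = True  # mark as archived in LogScale

segments = [Segment("seg-1", "/data/seg-1"), Segment("seg-2", "/data/seg-2")]
bucket = {}
run_archiving_pass(segments, bucket)
```

Because each segment is marked archived only after its upload succeeds, re-running the pass is safe: already-archived segments are simply skipped.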
An administrator must set up archiving per repository. After selecting a repository in LogScale, the configuration page is available under Settings.
Note
For slow-moving datasources it can take some time before segment files are completed on disk and made available to the archiving job. In the worst case, a segment file is not completed until it contains a gigabyte of uncompressed data or 30 minutes have passed, whichever comes first. The exact thresholds are those configured as the limits on mini segments.
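The worst-case completion rule above can be expressed as a simple either-or check. The constants below mirror the thresholds stated in the text for illustration only; in practice the limits come from the mini segment configuration.

```python
# Illustrative worst-case thresholds from the text above; the real
# values come from the configured limits on mini segments.
MAX_UNCOMPRESSED_BYTES = 1024 ** 3  # 1 GiB of uncompressed data
MAX_OPEN_SECONDS = 30 * 60          # 30 minutes

def segment_is_complete(uncompressed_bytes, open_seconds):
    """A segment file is completed once either limit is reached."""
    return (uncompressed_bytes >= MAX_UNCOMPRESSED_BYTES
            or open_seconds >= MAX_OPEN_SECONDS)
```

For a slow-moving datasource that has written only a few kilobytes, the byte limit is never the trigger, so the segment becomes available to the archiving job only after the 30-minute limit elapses.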
For more information on segment files and datasources, see Segment Files and Datasources.