S3 Archiving Backlog
Determine the backlog for an S3 Archiving job to identify tasks affecting merges and potential disk overflow
Query
#kind=logs #vhost=* /S3Archiving/i "Backlog for dataspace"
timeChart(#vhost, function=max(count))
Introduction
Falcon LogScale supports S3 archiving, configured per repository. This query shows whether the backlog for the S3 Archiving job is continuously increasing. Because an S3 archiving job can postpone merges, a growing backlog of unarchived ingested logs can result in disk overflow.
Step-by-Step
Starting with the source repository events.
(Flowchart: Events → Filter → Filter → Result Set, with the first Filter step highlighted)
#kind=logs #vhost=* /S3Archiving/i "Backlog for dataspace"
Filters for logs from any host (#vhost=*) whose logging class matches S3Archiving (case-insensitive) and that contain the phrase "Backlog for dataspace". The #vhost field lets you tell the different hosts' tasks apart.
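The filter semantics can be sketched in Python: a case-insensitive regex match on S3Archiving combined with a plain substring match on the backlog phrase. The sample log lines below are invented for illustration; only the "Backlog for dataspace" phrase and the /S3Archiving/i pattern come from the query.

```python
import re

# Hypothetical internal log lines (field names and wording are assumptions).
lines = [
    'class=S3ArchivingJob message="Backlog for dataspace repo1 is 42 segments"',
    'class=SegmentMerger message="Merging segments for repo1"',
]

# Keep lines matching /S3Archiving/i that also contain the exact phrase,
# mirroring the two filter steps in the query.
matches = [l for l in lines
           if re.search(r"S3Archiving", l, re.IGNORECASE)
           and "Backlog for dataspace" in l]

print(len(matches))  # → 1
```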
(Flowchart: Events → Filter → Filter → Result Set, with the second Filter step highlighted)
timeChart(#vhost, function=max(count))
Displays the results in a timechart with one series per #vhost, where each data point is the maximum value of the count field in that time bucket, that is, the largest number of jobs/tasks observed waiting to be archived.
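Conceptually, the timeChart step buckets events by time and, within each bucket, keeps the per-host maximum of the count field. A minimal Python sketch of that aggregation, with invented sample data (timestamps, host names, and counts are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical parsed events: (timestamp_seconds, vhost, backlog_count)
events = [
    (0,  "vhost1", 12),
    (30, "vhost1", 18),
    (70, "vhost1", 25),
    (0,  "vhost2", 3),
    (70, "vhost2", 4),
]

BUCKET = 60  # bucket width in seconds, analogous to timeChart's time span

# For each (time bucket, vhost) pair keep the maximum count -- max(count)
series = defaultdict(int)
for ts, vhost, count in events:
    key = (ts // BUCKET * BUCKET, vhost)
    series[key] = max(series[key], count)

for (bucket, vhost), value in sorted(series.items()):
    print(bucket, vhost, value)
```

A series whose per-bucket maxima keep rising over consecutive buckets is the continuously increasing backlog the query is meant to surface.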
Event Result set.
Summary and Results
The query determines the backlog for an S3 Archiving job in order to identify tasks that affect merges and could lead to disk overflow.