Cluster Management API
This page provides information about the HTTP API for managing self-hosted LogScale installations.
All requests except the status endpoint require root-level access. See Managing Root Access.
Refer to the Cluster Management documentation for more details on how to perform common tasks like adding and removing nodes from a cluster.
Available Endpoints
Endpoint | Method | Description
---|---|---
/api/v1/bucket-storage-target | GET, DELETE | Manage bucket storage targets.
/api/v1/clusterconfig/members | GET | List cluster nodes.
/api/v1/clusterconfig/members/$NODE_ID | GET, PUT | Get or modify a node in your cluster.
/api/v1/clusterconfig/members/$NODE_ID | DELETE | Delete a node from your cluster.
/api/v1/clusterconfig/segments/partitions/setdefaults | POST | Apply default partition settings.
/api/v1/clusterconfig/segments/partitions | GET, POST | Query and assign storage partitions to nodes.
/api/v1/clusterconfig/segments/partitions/set-replication-defaults | POST | Assign default storage partitions to nodes.
/api/v1/clusterconfig/segments/distribute-evenly | POST | Move existing segments between nodes.
/api/v1/clusterconfig/segments/prune-replicas | POST | Prune replicas when reducing the replica setting.
/api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all | POST | Move existing segments between nodes.
/api/v1/clusterconfig/segments/distribute-evenly-to-host/$NODE_ID | POST | Move existing segments between nodes.
/api/v1/clusterconfig/segments/distribute-evenly-from-host/$NODE_ID | POST | Move existing segments between nodes.
/api/v1/clusterconfig/ingestpartitions | GET, POST | Get or set digest partitions.
/api/v1/clusterconfig/ingestpartitions/setdefaults | POST | Set digest partition defaults.
/api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host/$NODE_ID | POST | Move digest partitions from a node.
/api/v1/clusterconfig/kafka-queues/partition-assignment | GET, POST | Manage Kafka queue settings.
/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults | POST | Manage Kafka queue settings.
/api/v1/repositories/$REPOSITORY_NAME/taggrouping | GET, POST | Set up grouping of tags.
/api/v1/repositories/$REPOSITORY_NAME/max-datasources | GET | See the current default limit on the number of data sources.
/api/v1/repositories/$REPOSITORY_NAME/max-datasources?number=${maxCount} | POST | Set a new value for the maximum number of data sources allowed.
/api/v1/repositories/$REPOSITORY_NAME/datasources/$DATASOURCEID/autosharding | GET, POST, DELETE | Configure auto-sharding for high-volume data sources.
/api/v1/status | GET | Get status and version of the node.
/api/v1/missing-segments | GET | Get a list of all missing segments in CSV format.
/api/v1/delete-missing-segments | POST | Delete missing segments.
Manage Bucket Storage Targets
For the GET form, this endpoint returns the list of storage buckets that LogScale is aware of:
GET /api/v1/bucket-storage-target
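Example, using the same authorization pattern as the other endpoints on this page:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/bucket-storage-target"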
The output returns the list of known buckets, and a value for the storage used and managed by LogScale within those buckets.
[
{
"bucket" : "bucket1",
"id" : "1",
"keyPrefix" : "logscale/",
"provider" : "s3",
"readOnly" : false,
"region" : "us-east-1",
"segmentsUsingBucket" : 1277,
"uploadedFilesUsingBucket" : 1
}
]
LogScale keeps a record of all the buckets that have ever been configured, even if the bucket is not in the current configuration. To delete an entry for an older bucket, use the DELETE command, specifying the bucket number (id from the returned JSON):
DELETE /api/v1/bucket-storage-target/1
This command will delete the bucket entry if LogScale determines that the bucket no longer contains data that is still required by the cluster. To forcibly delete the bucket, for example when the bucket has been lost, set the force parameter to true:
DELETE /api/v1/bucket-storage-target/1?force=true
Note that this will result in data loss.
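For example, following the curl pattern used elsewhere on this page (the id 1 below is taken from the JSON returned by the GET form):
curl -XDELETE -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/bucket-storage-target/1"
curl -XDELETE -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/bucket-storage-target/1?force=true"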
List Cluster Members
GET /api/v1/clusterconfig/members
Example:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members"
The returned value is a JSON of the current list of members:
[
{
"cores" : 4,
"diskUsagePercentage" : 29,
"displayName" : "localhost:8080",
"humioVersion" : "1.82.0--build-406818--sha-83da25d64199d5eea099adb7f4224ff3bef8ed5e",
"internalHostUri" : "http://localhost:8080",
"isBeingEvicted" : false,
"isEphemeral" : false,
"lifecycleState" : "Running",
"minimumCompatibleHumioVersion" : "1.44.0",
"nodeRoleString" : "all",
"queryCoordinator" : true,
"shouldPollFdr" : true,
"targetDiskUsagePercentage" : 90,
"totalDiskSpaceBytes" : 999995129856,
"uuid" : "lictrnKsAZ3zMgsd99YhmM4cyErUK98F",
"vhost" : 1
}
]
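If jq is installed, you can reduce this output to a quick overview of each node, for example:
curl -s -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members" | jq '.[] | {vhost, displayName, lifecycleState, diskUsagePercentage}'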
Adding a Node
See Adding a Node for information on adding nodes.
Modifying a Node in a Cluster
You can fetch and re-post the object representing the node in the cluster using GET/PUT requests. $NODE_ID is the integer ID of the node.
GET /api/v1/clusterconfig/members/$NODE_ID
PUT /api/v1/clusterconfig/members/$NODE_ID
Example:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1" > node-1.json
curl -XPUT -d @node-1.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1"
The GET request returns a JSON object such as:
{"vhost":1,"uuid":"7q2LwHv6q3C5jmdGj3EYL1n56olAYcQy","internalHostUri":"$YOUR_LOGSCALE_URL","displayName":"host-1"}
You can edit the internalHostUri and displayName fields in this structure and PUT the resulting changes back to the server, preserving the vhost and uuid fields.
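As a sketch of that workflow, assuming jq is available and using a purely illustrative new display name:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1" > node-1.json
jq '.displayName = "host-1-renamed"' node-1.json > node-1-updated.json
curl -XPUT -d @node-1-updated.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1"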
Deleting a Node
See the Removing a Node documentation for details on removing a node from a cluster.
If the host does not have any segment files and no assigned partitions, there is no data loss when deleting the node.
DELETE /api/v1/clusterconfig/members/$NODE_ID
Example:
curl -XDELETE -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1"
It is possible to drop a host, even if it has data and assigned partitions, by adding the query parameter accept-data-loss with the value true.
Warning
This procedure silently drops your data.
Example:
curl -XDELETE -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/members/1?accept-data-loss=true"
Applying Default Partition Settings
This is a shortcut for giving all members of a cluster the same share of the load on both Digest and Storage Partitions.
POST /api/v1/clusterconfig/partitions/setdefaults
Example:
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/partitions/setdefaults"
Querying & Assigning Storage Partitions
Please refer to the Storage Rules documentation.
When a data segment is complete, the server selects the host(s) to place the segment on by looking up a segment-related key in the storage partition table. The partitions map to a set of nodes. All of these nodes are then assigned as owners of the segment and will start getting their copy shortly after.
You can modify the storage partitions at any time. Any number of partitions larger than the number of nodes is allowed, but the recommended number of storage partitions is 24 or a similarly low number. There is no gain in having a large number of partitions.
Existing segments are not moved when re-assigning partitions. Partitions only affect segments completed after they are POST'ed.
GET /api/v1/clusterconfig/segments/partitions
POST /api/v1/clusterconfig/segments/partitions
Example:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/partitions" > segments-partitions.json
curl -XPOST -d @segments-partitions.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/partitions"
Assigning Default Storage Partitions to Nodes
When the set of nodes has been modified, you likely want to make the storage partitions distribute the storage load evenly among the current set of nodes. The following API allows doing that, while also selecting the number of replicas to use.
Any number of partitions larger than the number of nodes is allowed, but the recommended number of storage partitions is 24 or a similarly low number. There is no gain in having a large number of partitions.
The number of replicas must be at least one and at most the number of nodes in the cluster. The replicas value determines how many nodes keep a copy of each completed segment.
POST /api/v1/clusterconfig/segments/partitions/set-replication-defaults
Example:
echo '{ "partitionCount": 7, "replicas": 2 }' > settings.json
curl -XPOST -d @settings.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/partitions/set-replication-defaults"
Pruning Replicas when Reducing Replica Setting
If the number of replicas has been reduced, existing segments in the cluster do not get their replica count reduced. In order to reduce the number of replicas on existing segments, invoke this:
POST /api/v1/clusterconfig/segments/prune-replicas
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/prune-replicas"
Moving Existing Segments Between Nodes
There is an API for moving existing segments between nodes.
The distribute-evenly variant moves segments so that all nodes have their fair share, as stated in the storage partitioning settings, while leaving segments where they are as much as possible. The distribute-evenly-reshuffle-all variant applies the current partitioning scheme to all existing segments, possibly moving every segment to a new node.
It is also possible to move all existing segments off a node using the distribute-evenly-from-host variant. If that node is not assigned any partitions at all (both storage and ingest kinds), this relieves the node of all duties, preparing it to be deleted from the cluster.
If a new node is added, and you want it to take its fair share of the currently stored data, use the distribute-evenly-to-host variant.
POST /api/v1/clusterconfig/segments/distribute-evenly
POST /api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all
POST /api/v1/clusterconfig/segments/distribute-evenly-to-host/$NODE_ID
POST /api/v1/clusterconfig/segments/distribute-evenly-from-host/$NODE_ID
Hint
Add a "percentage=[0..100]" query parameter to only apply the action to a fraction of the full set.
Examples:
curl -XPOST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/distribute-evenly"
curl -XPOST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all?percentage=3"
curl -XPOST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/distribute-evenly-to-host/1"
curl -XPOST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/segments/distribute-evenly-from-host/7"
Digest Partitions
These route the incoming data while it is "in progress".
We recommend you read the Digest Rules documentation.
Warning
Do not POST to this API unless the cluster is running fine, with all members connected and active. All digest stops for a few seconds while the new settings are applied.
Digest does not start before all nodes are ready; thus, if a node is failing, digest does not resume.
GET/POST the setting to hand-edit where each partition goes. You cannot reduce the number of partitions.
Invoke setdefaults to distribute the current number of partitions evenly among the known nodes in the cluster.
Invoke distribute-evenly-from-host to reassign partitions currently assigned to $NODE_ID to the other nodes in the cluster.
GET /api/v1/clusterconfig/ingestpartitions
POST /api/v1/clusterconfig/ingestpartitions
POST /api/v1/clusterconfig/ingestpartitions/setdefaults
POST /api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host/$NODE_ID
Example:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/ingestpartitions" > digest-rules.json
curl -XPOST -d @digest-rules.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/ingestpartitions"
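Applying the defaults, or moving digest partitions away from a specific node, follows the same pattern. For example (node ID 1 is used here purely as an illustration):
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/ingestpartitions/setdefaults"
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host/1"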
Managing Kafka Queue Settings
The ingest queues are partitions of the Kafka queue humio-ingest. LogScale offers an API for editing the Kafka partition-to-broker assignments in this queue. Note that changes to these settings are applied asynchronously, thus you can get the previous settings, or a mix with the latest settings, for a few seconds after applying a new set.
GET /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
Example:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/kafka-queues/partition-assignment" > kafka-ingest-partitions.json
curl -XPOST -d @kafka-ingest-partitions.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/kafka-queues/partition-assignment"
echo '{ "partitionCount": 24, "replicas": 2 }' > kafka-ingest-settings.json
curl -XPOST -d @kafka-ingest-settings.json -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults"
Setup Grouping of Tags
GET /api/v1/repositories/$REPOSITORY_NAME/taggrouping
POST /api/v1/repositories/$REPOSITORY_NAME/taggrouping
Note
This is an advanced feature.
LogScale recommends that you only use the parser as a tag, in the field #type.
Using more tags may speed up queries on large data volumes, but this only works for tag fields with a bounded value set. The speed-up only affects queries prefixed with #tag=value pairs that significantly filter out input events.
Tags are the fields with a prefix of #. They are used internally to shard data into smaller streams. A data source is created for every unique combination of tag values set by the clients (such as log shippers). LogScale will reject ingested events once a certain number of data sources get created. The limit is currently 10,000 per repository.
For some use cases, such as having the "client IP" from an access log as a tag, too many different tag values will arise. For such a case, it is necessary to either stop having the field as a tag, or create a grouping rule on the tag field. Existing data is not rewritten when grouping rules are added or changed. Changing the grouping rules will thus in itself create more data sources.
Example: Setting the grouping rules for repository $REPOSITORY_NAME to hash the field #host into 8 buckets, and #client_ip into 10 buckets. Note how the field names do not include the # prefix in the rules.
curl $YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/taggrouping \
-X POST \
-H "Authorization: Bearer $TOKEN" \
-H 'Content-Type: application/json' \
-d '[ {"field":"host","modulus": 8}, {"field":"client_ip","modulus": 10} ]'
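To inspect the currently configured grouping rules, the GET form follows the same pattern:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/taggrouping"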
Adding a new set of rules using POST replaces the current set. The previous sets are kept, and if a previous set matches, it is reused. The previous rules remain in the system but may be deleted by LogScale once all data sources referring to them have been deleted (through retention settings).
When using grouped tags in the query field, you can expect a speed-up of approximately the modulus compared to not including the tags in the query, provided you use an exact match on the field. If you use a wildcard (*) in the value for a grouped tag, the implementation currently scans all data sources that have a non-empty value for that field and filters the events to only return the results that match the wildcard pattern.
For non-grouped tag fields, it is efficient to use a wildcard at either end of the value string to match.
LogScale also supports auto-grouping of tags using the configuration variables MAX_DISTINCT_TAG_VALUES (default is 1000) and TAG_HASHING_BUCKETS (default is 32). LogScale checks the number of distinct values for each key in each tag combination against MAX_DISTINCT_TAG_VALUES at regular intervals. If this threshold is exceeded, a new grouping rule is added with the modulus set to the value of TAG_HASHING_BUCKETS, but only if there is no existing rule for that tag key. You can thus configure rules using the API above and decide the number of buckets there. This is preferable to relying on auto-detection, as auto-detection works after the fact and thus leaves a large number of unused data sources that will need to be deleted by retention at some point. The auto-grouping support is meant as a safety measure to avoid suddenly creating many data sources by mistake for a single tag key.
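As a minimal sketch, assuming these variables are set in the server's environment configuration (the values below simply restate the documented defaults):
MAX_DISTINCT_TAG_VALUES=1000
TAG_HASHING_BUCKETS=32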
If you are using a hosted LogScale instance while following this procedure, please contact support if you wish to add grouping rules to your repository.
Importing a Repository from Another LogScale Instance (BETA)
You can import users, dashboards and segment files from another LogScale instance. You need to get a copy of /data/humio-data/global-data-snapshot.json from the origin server.
You also need to copy the segment files that you want to import. These must be placed in the folder /data/humio-data/ready_for_import_dataspaces using the following structure:
/data/humio-data/ready_for_import_dataspaces/dataspace_$ID
While copying the repository files to the server, place them in a different folder, then move the folder to the proper name once the copy is complete. Note that the name of the directory uses the internal ID of the repository, which is the directory name in the source system.
The folder /data/humio-data/ready_for_import_dataspaces must be readable and writable for the humio user running the server, as the server moves the files to another directory and deletes the imported files when it is done with them, one at a time.
Example (note that you need both NAME and ID of the repository):
NAME="target-repo-name"
SRC_NAME="source-repo-name"
ID="my-repository-id"
sudo mkdir /data/humio-data/ready_for_import_dataspaces
sudo mv /data/from-other/dataspace_$ID /data/humio-data/ready_for_import_dataspaces
sudo chown -R humio /data/humio-data/ready_for_import_dataspaces/
curl -XPOST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d @from-other-global-data-snapshot.json \
"$YOUR_LOGSCALE_URL/api/v1/importrepository/$NAME?importSegmentFilesOnly=true&importFromName=$SRC_NAME"
The POST imports the metadata, such as users and dashboards, and moves the repository folder from /data/humio-data/ready_for_import_dataspaces to /data/humio-data/import. A low-priority background task will then import the actual segment files from that point on.
You can start using the ingest tokens and other data that are not actual log events as soon as the POST has completed.
You can run the POST that starts the import of the same repository more than once. This is useful if you wish to import only a fraction of the data files at first, but get all the metadata. When you rerun the POST, the metadata is inserted/updated again only if it no longer matches. The new repository files will get copied at that point in time.
If you re-import the same segment files more than once, you get duplicate events in your target repository.
Note
We strongly recommend that you import to a new repository, at least until you have practiced this procedure. Having the newly imported data in a separate repository makes it easy to delete and try again, while deleting data from an existing repository will be very time consuming and error prone.
Set Data Sources Limits
LogScale supports controlling the default limit on the number of data sources for each repository. This is configured through the MAX_DATASOURCES environment variable.
The REST API endpoint max-datasources allows setting a new individual limit for the number of data sources on each repository.
Examples:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/max-datasources"
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/max-datasources?number=25000"
Configure Auto-Sharding for High-Volume Data Sources
A data source is ultimately bounded by the volume that one CPU thread can manage to compress and write to the filesystem. This is typically in the 1-4 TB/day range. To handle more ingest traffic from a specific data source, you need to provide more variability in the set of tags. But in some cases, it may not be possible or desirable to adjust the set of tags or tagged fields in the client. To solve this case, LogScale supports adding a synthetic tag that is assigned a random number for each small bulk of events.
LogScale supports detecting if there is a high load on a data source, and automatically triggers this auto-sharding on the data sources. You will see this happening on "fast" data sources, typically if more than 2 TB/day is delivered to a single data source. The events then get an extra tag, #humioAutoShard, that is assigned a random integer value.
This is configured through the setting AUTOSHARDING_TRIGGER_DELAY_MS, which is compared to the time an event spends in the ingest pipeline inside LogScale. When the delay threshold is exceeded, the number of shards on that data source (a combination of tags) is doubled. The default value for AUTOSHARDING_TRIGGER_DELAY_MS is 3,600,000 ms (3,600 seconds). The delay also needs to be increasing, as observed two times in a row at an interval of AUTOSHARDING_CHECKINTERVAL_MS, which defaults to 20,000 ms (20 seconds).
The setting AUTOSHARDING_MAX controls how many different data sources get created this way for each "real" data source. The default value is 128. Internally, the number of cores and hosts reading from the ingest queue is also taken into consideration, aiming at not creating more shards than the total number of cores in the ingest part of the cluster.
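A minimal sketch of how these settings might look as environment variables in the server configuration (the values below simply restate the documented defaults; adjust them for your installation):
AUTOSHARDING_TRIGGER_DELAY_MS=3600000
AUTOSHARDING_CHECKINTERVAL_MS=20000
AUTOSHARDING_MAX=128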
Configure Sticky Auto-Sharding for High-Volume Data Sources
In some use cases, it makes sense to disable the automatic tuning and manage these settings using the API. Set AUTOSHARDING_MAX to 1 to make the system never increase the number of autoshards of data sources, then use the API to set sticky autosharding settings on the selected data sources that require it. The sticky settings are not limited by the AUTOSHARDING_MAX configuration.
Examples of Sticky Autosharding Settings
# List both sticky and auto-managed settings for all data sources in the repository:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/autosharding"
# Get, Post, Delete settings for a specific data source:
curl -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/datasources/$DATASOURCEID/autosharding"
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/datasources/$DATASOURCEID/autosharding"
curl -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/datasources/$DATASOURCEID/autosharding?number=7"
curl -XDELETE -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/repositories/$REPOSITORY_NAME/datasources/$DATASOURCEID/autosharding"
Status Endpoint
The status endpoint can be used to check whether the node can be reached and which version it is running. This is useful as a smoke test after an update or as a health check to be used with service discovery tools such as Consul.
Example:
curl -s "$YOUR_LOGSCALE_URL/api/v1/status"
{"status":"ok","version":"1.8.2--build-11362--sha-173241677d"}
Missing Segments
If you end up in a situation where you have lost data/segment files, you need to remove these from Global.
The missing segments API will list all segments that no longer exist. The format is line-based and each line specifies one missing segment:
dataspaceId datasourceId segmentId 'repositoryName' 'tagString' startUnixTimeMs endUnixTimeMs
This format matches the input format of the Delete Missing Segments API.
Example:
curl -s -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/missing-segments"
FWEClWu82dW46QqpOWFrGcUO SknGDKCcOScTxvkEEAMsgCvX KYneZu6juzjdsHbfH3YWn7YS 'developer' '@host=test_host@source=test_source@type=kv@' 1451606400000 1451606400000
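Because the output matches the input format of the Delete Missing Segments API, it can be useful to save it to a file for review before deleting anything, for example:
curl -s -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/missing-segments" > missing-segments.txt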
Delete Missing Segments
This API uses the POST HTTP method and supports a number of options.
You can supply a list of segments to delete; the format is the same as the output from Missing Segments, but note that empty lines are disallowed. Strictly speaking, only the first three fields on each line are needed; the rest are ignored. If errors occur during the deletion process, the process is aborted.
You can provide the query parameter ignoreErrors=true (defaults to false). When using this, errors during the deletion process will be ignored.
You can provide the query parameter deleteAll=true (defaults to false). When using this, providing a list of segments is not needed; all missing segments in the cluster will be deleted. This is a one-shot "command" that will clean up the system.
Example - explicit list, fail on error:
curl -s -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/delete-missing-segments" \
-d "FWEClWu82dW46QqpOWFrGcUO SknGDKCcOScTxvkEEAMsgCvX KYneZu6juzjdsHbfH3YWn7YS 'developer' '@host=test_host@source=test_source@type=kv@' 1451606400000 1451606400000"
Example - explicit list, ignore errors:
curl -s -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/delete-missing-segments?ignoreErrors=true" \
-d "FWEClWu82dW46QqpOWFrGcUO SknGDKCcOScTxvkEEAMsgCvX KYneZu6juzjdsHbfH3YWn7YS 'developer' '@host=test_host@source=test_source@type=kv@' 1451606400000 1451606400000"
Example - delete all, ignore errors:
curl -s -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/delete-missing-segments?ignoreErrors=true&deleteAll=true"
Resurrect Deleted Segments
This API endpoint allows undoing the deletion of recently deleted segments, in particular deletions caused by lowering retention settings. The endpoint resets the internal "tombstone" on deleted segments and restores all files that are still available in a bucket when using bucket storage. By default, LogScale keeps files in bucket storage for 7 days longer than the retention settings require. This means that extending retention by 7 days and then using this API can restore approximately the latest 7 days' worth of deleted events. In the case of retention being lowered from the proper value to something very small, this gives you up to 7 days to revert the change to the retention settings and invoke this API endpoint before any events are lost. Invoking this endpoint requires root access.
Example:
curl -s -XPOST -H "Authorization: Bearer $TOKEN" "$YOUR_LOGSCALE_URL/api/v1/$VIEWNAME/resurrect-deleted-segments"