Dynamic Configuration
Dynamic Configuration allows you to set configuration values for the cluster while it is running.
Table: Dynamic Configuration Parameters
Variable | Default Value | Availability | Description |
---|---|---|---|
BucketStorageKeySchemeVersion | 2 | | Allows setting a new format for the keys (file names) placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes tmp/ and globalsnapshots/. The new format is applied only to buckets created after this dynamic configuration has been set to 2. |
BucketStorageUploadInfrequentThresholdDays | | | Sets the minimum number of days of remaining retention for the data in order to switch from the default "S3 Standard" storage class to "S3 Intelligent-Tiering" in AWS S3. |
BucketStorageWriteVersion | 3 | | Sets the format for files written to bucket storage to a format that allows files larger than 2 GB and incurs less memory pressure when decrypting files during download from the bucket. |
defaultDigestReplicationFactor | 2 | | Allows configuration of the replication factor used for the digest partitions table. Defaults to 2. If necessary, a different default can be set explicitly using the corresponding environment variable. |
defaultSegmentReplicationFactor | 2 | | Controls segment file redundancy. Defaults to 2. |
DelayIngestResponseDueToIngestLagScale | 300,000 milliseconds | | Sets the number of milliseconds of lag that adds 1 to the factor applied. |
DelayIngestResponseDueToIngestLagMaxFactor | 2 | | Controls the mechanism that delays the response to an HTTP ingest request from nodes that also do digest when the digest node locally experiences digest lag, limiting how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent. |
DelayIngestResponseDueToIngestLagThreshold | 20,000 milliseconds | | Sets the number of milliseconds of digest lag at which the feature starts to kick in. |
FdrEnable | false | | Used by administrators to turn FDR polling on or off for the entire cluster with a single update. |
FdrExcludedNodes | empty | | Used by administrators to exclude specific nodes from polling FDR. |
FdrMaxNodes | 5 | | Used by administrators to cap how many nodes at most should simultaneously poll data from the same FDR feed. |
FlushSegmentsAndGlobalOnShutdown | false | | When set, recent segments and the global snapshot are flushed on shutdown. When not set (the default), this extra work is skipped, shortening shutdown, as very recent segments can be resumed on the next boot, assuming that the next boot continues on the same Kafka epoch. |
GraphQLSelectionSizeLimit | 1,000 | | Limits GraphQL queries on the total number of selected fields and fragments. |
GroupDefaultLimit | 20,000 | | Default value for the limit parameter in groupBy(), selfJoin() and some other functions, when not specified. See Limits & Standards for details. |
GroupMaxLimit | 1,000,000 | | Maximum value for the limit parameter in the groupBy() function. See Limits & Standards for details. |
IngestFeedAwsDownloadMaxObjectSize | 2 GB | | Maximum size of objects downloaded from S3, in bytes. |
IngestFeedAwsProcessingDownloadBufferSize | 8 MB | | The size of the download buffer, in bytes. Increasing this downloads larger parts at a time at the cost of additional memory consumption when polling. |
IngestFeedAwsProcessingEventBufferSize | 1 MB | | The size of the buffer after preprocessing and splitting into individual events. This is also the maximum event size ingest feeds can ingest; note that other parts of LogScale also put restrictions on the maximum event size. |
IngestFeedAwsProcessingEventsPerBatch | 1,000 | | The number of events ingested per batch. |
IngestFeedGovernorRateOverride | 100,000 | | The change in rate when under or over the setpoint. Increasing this makes the governor more aggressive in changing the ingest rate. |
IngestFeedGovernorIngestDelayHigh | 10 seconds | | The default ingest delay high setpoint for the ingest feed governor. If the ingest delay is greater than the value specified, ingest feeds may decrease ingest. The dynamic configuration can override the value of the environment variable. |
IngestFeedGovernorIngestDelayLow | 5 seconds | | The default ingest delay low setpoint for the ingest feed governor. If the ingest delay is less than the value specified, ingest feeds may increase ingest. The dynamic configuration can override the value of the environment variable. |
JoinRowLimit | 200,000 | | Maximum number of rows that the join() function can produce. Used as an alternative to the corresponding environment variable. |
LiveQueryMemoryLimit | 100,000,000 bytes | deprecated in 1.116.0 | Determines how much memory a live query can consume during its execution. |
LookupTableSyncAwaitSeconds | 30 seconds | | The amount of time, in seconds, that LogScale is willing to wait when building a lookup table during parsing or event forwarding before the lookup operation fails. |
MatchFilesMaxHeapFraction | 0.5 | | Defines the fraction of the heap that may be used for lookup tables; the default allows half of the total configured heap. This can be used to avoid out-of-memory (OOM) issues when using many lookup files. |
MaxCsvFileUploadSizeBytes | 209,715,200 bytes | | Controls the maximum size of uploaded CSV files. |
MaxIngestRequestSize | 33,554,432 | | Size limit of ingest requests after content-encoding has been applied, expressed in bytes. |
MaxJsonFileUploadSizeBytes | 104,857,600 bytes | | Controls the maximum size of uploaded JSON files. |
MaxOpenSegmentsOnWorker | 50,000 | | Controls the hard cap on open segment files for the scheduler. Do not modify this setting unless advised to do so by CrowdStrike Support. |
QueryBacktrackingLimit | 2,000 | introduced in 1.139.0 | Sets a limit on query backtracks to stop a query from iterating over individual events too many times, for example due to excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags. |
QueryCoordinatorMaxHeapFraction | 0.5 | | Controls query queueing based on the available memory in the query coordinator. To disable queueing, set it to 1000. |
QueryCoordinatorMemoryLimit | 4,000,000,000 bytes | | Controls the amount of memory, in bytes, that the query coordinator can consume per query during execution. This limit directly influences how much memory a query is allowed to use and hold on to. The query coordinator needs to keep at least four representations of each query's state in memory, which in turn means that the data a query can collect will be at most one quarter of this value (see the worked example after this table). This limit ensures that the coordinating nodes of a cluster do not run out of memory. |
QueryMemoryLimit | 100,000,000 bytes | deprecated in 1.116.0 | Determines how much memory a non-live query can consume during its execution. |
QueryPartitionAutoBalance | | | Turns on/off automatic balancing of query partitions across nodes. It is used whenever there are changes to the set of live nodes tasked with doing query coordination in the cluster. Existing live queries will be migrated to the new preferred query coordinator node. Queries that do not currently support migration, such as those that have not completed their historical part and those that involve Join Query Functions, will not be migrated and need to be resubmitted. You can trigger a rebalance using the GraphQL mutation optimizeQueryPartitions. |
QueryResultRowCountLimit | | | Globally limits how many events a query can return. This flag can be set by administrators through GraphQL. |
RdnsDefaultLimit | 5,000 | introduced in 1.137.0 | Sets the default number of resulting events allowed in the rdns() function. |
RdnsMaxLimit | 20,000 | introduced in 1.137.0 | Sets the maximum allowed number of resulting events in the rdns() function. |
ReplaceANSIEscapeCodes | | | Controls whether LogScale replaces ANSI escape codes in the result set. These are replaced with the \ufffd character to prevent potential security issues when viewing the returned data. |
StateRowLimit | 20,000 | | Maximum number of rows allowed in stateful functions. Used as an alternative to the corresponding environment variable. |
StaticQueryFractionOfCores | | | Limits queries from one organization (or user, on single-organization clusters) to run on at most a certain fraction of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. |
TargetMaxRateForDatasource | 2 MB | | Sets the target maximum rate (in MB/s) of ingest for each shard of a datasource. |
UnauthenticatedGraphQLSelectionSizeLimit | 150 | | Limits GraphQL queries on the total number of selected fields and fragments for unauthenticated users. |
UndersizedMergingRetentionPercentage | 20 | | When selecting undersized segments to merge, this setting controls how wide a time span can be merged together. The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90. |
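To put the QueryCoordinatorMemoryLimit description in concrete terms, the data a single query can collect is roughly the configured limit divided by four. A minimal sketch of that arithmetic in Python; the helper name is illustrative only, and the division-by-four rule is taken from the table description above:
#! /usr/local/bin/python3
# Illustrative only: the query coordinator keeps at least four representations
# of each query's state, so the data one query can collect is at most 1/4 of
# QueryCoordinatorMemoryLimit (per the table description above).
def max_collectable_bytes(query_coordinator_memory_limit: int) -> int:
    return query_coordinator_memory_limit // 4

print(max_collectable_bytes(4_000_000_000))  # default limit -> 1000000000 bytes, about 1 GB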
Getting Dynamic Configuration List
To obtain a list of all the available dynamic configurations, use the dynamicConfigs() GraphQL query:
query{
dynamicConfigs{
dynamicConfigKey
}
}
curl -v -X POST $YOUR_LOGSCALE_URL/graphql \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF
{"query" : "query{
dynamicConfigs{
dynamicConfigKey
}
}"
}
EOF
curl -v -X POST $YOUR_LOGSCALE_URL/graphql ^
-H "Authorization: Bearer $TOKEN" ^
-H "Content-Type: application/json" ^
-d @'{"query" : "query{ ^
dynamicConfigs{ ^
dynamicConfigKey ^
} ^
}" ^
} '
curl.exe -X POST `
-H "Authorization: Bearer $TOKEN" `
-H "Content-Type: application/json" `
-d '{"query" : "query{
dynamicConfigs{
dynamicConfigKey
}
}"
}' `
"$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use HTTP::Request;
use LWP;
my $TOKEN = "TOKEN";
my $uri = '$YOUR_LOGSCALE_URL/graphql';
my $json = '{"query" : "query{
dynamicConfigs{
dynamicConfigKey
}
}"
}';
my $req = HTTP::Request->new("POST", $uri );
$req->header("Authorization" => "Bearer $TOKEN");
$req->header("Content-Type" => "application/json");
$req->content( $json );
my $lwp = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->{"_content"},"\n";
#! /usr/local/bin/python3
import requests
url = '$YOUR_LOGSCALE_URL/graphql'
mydata = r'''{"query" : "query{
dynamicConfigs{
dynamicConfigKey
}
}"
}'''
resp = requests.post(url,
data = mydata,
headers = {
"Authorization" : "Bearer $TOKEN",
"Content-Type" : "application/json"
}
)
print(resp.text)
const https = require('https');
// Use a template literal so the multi-line GraphQL query is valid JavaScript.
const data = JSON.stringify({
query: `query {
dynamicConfigs {
dynamicConfigKey
}
}`,
});
const options = {
hostname: '$YOUR_LOGSCALE_URL',
path: '/graphql',
port: 443,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': data.length,
Authorization: 'BEARER ' + process.env.TOKEN,
'User-Agent': 'Node',
},
};
const req = https.request(options, (res) => {
let data = '';
console.log(`statusCode: ${res.statusCode}`);
res.on('data', (d) => {
data += d;
});
res.on('end', () => {
console.log(JSON.parse(data).data);
});
});
req.on('error', (error) => {
console.error(error);
});
req.write(data);
req.end();
This returns a list of key/value pairs, giving each configuration key and its current setting. For example:
{
"data": {
"dynamicConfigs": [
{
"dynamicConfigKey": "MaxIngestRequestSize",
"dynamicConfigValue": "None"
},
{
"dynamicConfigKey": "JoinRowLimit",
"dynamicConfigValue": "200000"
},
{
"dynamicConfigKey": "JoinDefaultLimit",
"dynamicConfigValue": "100000"
},
...
]
}
}
The exact list of configurable parameters will depend on the version, feature flags and environment.
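If you only need the current value of one parameter, you can request both dynamicConfigKey and dynamicConfigValue and filter the response client-side. A minimal Python sketch, assuming the same $YOUR_LOGSCALE_URL and $TOKEN placeholders used in the examples above; the client-side filtering shown here is illustrative and not part of the API:
#! /usr/local/bin/python3
import requests

url = '$YOUR_LOGSCALE_URL/graphql'

# Request both the key and the current value of every dynamic configuration.
mydata = '{"query" : "query{ dynamicConfigs{ dynamicConfigKey dynamicConfigValue } }"}'

resp = requests.post(url,
                     data = mydata,
                     headers = {
                         "Authorization" : "Bearer $TOKEN",
                         "Content-Type" : "application/json"
                     })
configs = resp.json()["data"]["dynamicConfigs"]

# Client-side filter for a single parameter, for example GroupDefaultLimit.
print([c for c in configs if c["dynamicConfigKey"] == "GroupDefaultLimit"])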
Setting a Dynamic Configuration Value
Important
Changing Dynamic Config settings takes effect immediately and alters the operation of your LogScale instance. Contact Support if you need advice on these settings.
To set a Dynamic Config value, use the setDynamicConfig() mutation:
mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: "VALUE_FOR_CONFIG" })
}
curl -v -X POST http://$YOUR_LOGSCALE_URL/graphql \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF
{"query" : "mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" })
}"
}
EOF
curl -v -X POST http://$YOUR_LOGSCALE_URL/graphql ^
-H "Authorization: Bearer $TOKEN" ^
-H "Content-Type: application/json" ^
-d @'{"query" : "mutation { ^
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" }) ^
}" ^
} '
curl.exe -X POST `
-H "Authorization: Bearer $TOKEN" `
-H "Content-Type: application/json" `
-d '{"query" : "mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" })
}"
}' `
"http://$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use HTTP::Request;
use LWP;
my $TOKEN = "TOKEN";
my $uri = 'http://$YOUR_LOGSCALE_URL/graphql';
my $json = '{"query" : "mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" })
}"
}';
my $req = HTTP::Request->new("POST", $uri );
$req->header("Authorization" => "Bearer $TOKEN");
$req->header("Content-Type" => "application/json");
$req->content( $json );
my $lwp = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->{"_content"},"\n";
#! /usr/local/bin/python3
import requests
url = 'http://$YOUR_LOGSCALE_URL/graphql'
mydata = r'''{"query" : "mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" })
}"
}'''
resp = requests.post(url,
data = mydata,
headers = {
"Authorization" : "Bearer $TOKEN",
"Content-Type" : "application/json"
}
)
print(resp.text)
const https = require('https');
// Use a template literal so the multi-line GraphQL mutation is valid JavaScript.
const data = JSON.stringify({
query: `mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: "VALUE_FOR_CONFIG" })
}`,
});
const options = {
hostname: '$YOUR_LOGSCALE_URL',
path: '/graphql',
port: 443,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': data.length,
Authorization: 'BEARER ' + process.env.TOKEN,
'User-Agent': 'Node',
},
};
const req = https.request(options, (res) => {
let data = '';
console.log(`statusCode: ${res.statusCode}`);
res.on('data', (d) => {
data += d;
});
res.on('end', () => {
console.log(JSON.parse(data).data);
});
});
req.on('error', (error) => {
console.error(error);
});
req.write(data);
req.end();
{
"data" : {
"setDynamicConfig" : true
}
}
For example:
mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: "30000" })
}
curl -v -X POST http://$YOUR_LOGSCALE_URL/graphql \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF
{"query" : "mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" })
}"
}
EOF
curl -v -X POST http://$YOUR_LOGSCALE_URL/graphql ^
-H "Authorization: Bearer $TOKEN" ^
-H "Content-Type: application/json" ^
-d @'{"query" : "mutation { ^
setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" }) ^
}" ^
} '
curl.exe -X POST `
-H "Authorization: Bearer $TOKEN" `
-H "Content-Type: application/json" `
-d '{"query" : "mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" })
}"
}' `
"http://$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use HTTP::Request;
use LWP;
my $TOKEN = "TOKEN";
my $uri = 'http://$YOUR_LOGSCALE_URL/graphql';
my $json = '{"query" : "mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" })
}"
}';
my $req = HTTP::Request->new("POST", $uri );
$req->header("Authorization" => "Bearer $TOKEN");
$req->header("Content-Type" => "application/json");
$req->content( $json );
my $lwp = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->{"_content"},"\n";
#! /usr/local/bin/python3
import requests
url = 'http://$YOUR_LOGSCALE_URL/graphql'
mydata = r'''{"query" : "mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" })
}"
}'''
resp = requests.post(url,
data = mydata,
headers = {
"Authorization" : "Bearer $TOKEN",
"Content-Type" : "application/json"
}
)
print(resp.text)
const https = require('https');
// Use a template literal so the multi-line GraphQL mutation is valid JavaScript.
const data = JSON.stringify({
query: `mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: "30000" })
}`,
});
const options = {
hostname: '$YOUR_LOGSCALE_URL',
path: '/graphql',
port: 443,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': data.length,
Authorization: 'BEARER ' + process.env.TOKEN,
'User-Agent': 'Node',
},
};
const req = https.request(options, (res) => {
let data = '';
console.log(`statusCode: ${res.statusCode}`);
res.on('data', (d) => {
data += d;
});
res.on('end', () => {
console.log(JSON.parse(data).data);
});
});
req.on('error', (error) => {
console.error(error);
});
req.write(data);
req.end();
{
"data" : {
"setDynamicConfig" : true
}
}
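To confirm that a change took effect, you can set the value and immediately read it back. A minimal Python sketch, assuming the same $YOUR_LOGSCALE_URL and $TOKEN placeholders as above; building the request body with json.dumps avoids the manual \" escaping used in the raw examples, and the run() helper is illustrative only:
#! /usr/local/bin/python3
import json
import requests

url = '$YOUR_LOGSCALE_URL/graphql'
headers = {
    "Authorization": "Bearer $TOKEN",
    "Content-Type": "application/json",
}

def run(graphql):
    # json.dumps escapes the quotes embedded in the GraphQL document for us.
    resp = requests.post(url, data=json.dumps({"query": graphql}), headers=headers)
    resp.raise_for_status()
    return resp.json()

# Set GroupDefaultLimit to 30000, as in the example above.
print(run('mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: "30000" }) }'))

# Read the value back to verify the change.
for c in run('query { dynamicConfigs { dynamicConfigKey dynamicConfigValue } }')["data"]["dynamicConfigs"]:
    if c["dynamicConfigKey"] == "GroupDefaultLimit":
        print(c)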