Dynamic Configuration
Dynamic Configuration allows you to set configuration values for the cluster while it is running.
Table: Dynamic Configuration Parameters
Variable | Default Value | Availability | Description |
---|---|---|---|
BucketStorageKeySchemeVersion | 2 | | Sets a new format for the keys (file names) placed in the bucket. When the new format is applied, file listing only happens for the prefixes `tmp/` and `globalsnapshots/`. The new format applies only to buckets created after this dynamic configuration has been set to `2`. |
BucketStorageUploadInfrequentThresholdDays | | | Sets the minimum number of days of remaining retention required for data to switch from the default "S3 Standard" storage class to "S3 Intelligent-Tiering" in AWS S3. |
BucketStorageWriteVersion | 3 | | Sets the format for files written to bucket storage. The current format supports files larger than 2 GB and incurs less memory pressure when decrypting files during download from the bucket. |
defaultDigestReplicationFactor | 2 | | Configures the replication factor used for the digest partitions table. |
defaultSegmentReplicationFactor | 2 | | Controls segment file redundancy. |
DelayIngestResponseDueToIngestLagScale | 300,000 | | Sets the number of milliseconds of lag that adds 1 to the applied factor. |
DelayIngestResponseDueToIngestLagMaxFactor | 2 | | Controls the mechanism that delays the response to an HTTP ingest request from nodes that also do digest, when the node locally experiences digest lag. Limits how much longer than the actual execution the response may take, measured as a factor on top of the actual time spent. |
DelayIngestResponseDueToIngestLagThreshold | 20,000 | | Sets the number of milliseconds of digest lag at which the feature starts to take effect. |
FdrEnable | false | | Lets administrators turn FDR polling on or off for the entire cluster with a single update. |
FdrExcludedNodes | empty | | Lets administrators exclude specific nodes from polling FDR. |
FdrMaxNodes | 5 | | Lets administrators cap how many nodes may simultaneously poll data from the same FDR feed. |
FlushSegmentsAndGlobalOnShutdown | false | | When set, forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When left at the default, this extra work is skipped and shutdown is faster, since very recent segments can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. |
GraphQLSelectionSizeLimit | 1,000 | | Limits GraphQL queries on the total number of selected fields and fragments. |
GraphQlDirectivesAmountLimit | 25 | introduced in 1.145.0 | Restricts how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. |
GroupDefaultLimit | 20,000 | | Default value for the limit parameter in `groupBy()`, `selfJoin()` and some other functions, when not specified. See Limits & Standards for details. |
GroupMaxLimit | 1,000,000 | | Maximum value for the limit parameter in the `groupBy()` function. See Limits & Standards for details. |
IngestFeedAwsDownloadMaxObjectSize | 2GB | | Maximum size, in bytes, of objects downloaded from S3. |
IngestFeedAwsProcessingDownloadBufferSize | 8MB | | Size, in bytes, of the buffer used when downloading. Increasing this downloads larger parts at a time at the cost of additional memory consumption when polling. |
IngestFeedAwsProcessingEventBufferSize | 1MB | | Size of the buffer after preprocessing and splitting into individual events. This is also the maximum event size ingest feeds can ingest; note that other parts of LogScale also restrict the maximum event size. |
IngestFeedAwsProcessingEventsPerBatch | 1,000 | | The number of events ingested per batch. |
IngestFeedGovernorRateOverride | 100,000 | | The change in rate when under or over the setpoint. Increasing this makes the governor more aggressive in changing the ingest rate. |
IngestFeedGovernorIngestDelayHigh | 10 seconds | | The default high setpoint for ingest delay in the ingest feed governor. If the ingest delay is greater than this value, ingest feeds may decrease ingest. |
IngestFeedGovernorIngestDelayLow | 5 seconds | | The default low setpoint for ingest delay in the ingest feed governor. If the ingest delay is less than this value, ingest feeds may increase ingest. |
JoinRowLimit | 200,000 | | Maximum number of rows that the `join()` function can return. |
LiveQueryMemoryLimit | 100,000,000 | | Determines how much memory a live query can consume during its execution. |
LookupTableSyncAwaitSeconds | 30 | | The amount of time, in seconds, that LogScale will wait when building a lookup table during parsing or event forwarding before the lookup operation fails. |
MatchFilesMaxHeapFraction | 0.5 | | The fraction of the heap allowed to be used for lookup tables; the default allows half of the total configured heap. This can help avoid out-of-memory (OOM) issues when using many lookup files. |
MaxCsvFileUploadSizeBytes | 209,715,200 bytes | | Controls the maximum size of uploaded CSV files. |
MaxIngestRequestSize | 33,554,432 | | Size limit, in bytes, of ingest requests after content-encoding has been applied. |
MaxJsonFileUploadSizeBytes | 104,857,600 bytes | | Controls the maximum size of uploaded JSON files. |
MaxOpenSegmentsOnWorker | 50,000 | | Controls the hard cap on open segment files for the scheduler. Do not modify this setting unless advised to do so by CrowdStrike Support. |
QueryBacktrackingLimit | 3,000 | | Limits how many times a query may iterate over individual events, for example due to excessive use of the `copyEvent()`, `join()` and `split()` functions, or `regex()` with repeat flags. |
QueryCoordinatorMaxHeapFraction | 0.5 | | Controls query queueing based on the available memory in the query coordinator. To disable queueing, set it to 1000. |
QueryCoordinatorMemoryLimit | 4 | | Controls the maximum memory usage of the coordinating node. This limit in turn determines the limits on the static query state size and the live query state size. |
QueryMemoryLimit | 100,000,000 | | Determines how much memory a non-live query can consume during its execution. |
QueryPartitionAutoBalance | | | Turns automatic balancing of query partitions across nodes on or off. Balancing is used whenever the set of live nodes tasked with query coordination in the cluster changes; existing live queries are then migrated to the new preferred query coordinator node. Queries that do not currently support migration, such as those that have not completed their historical part and those that involve Join Query Functions, are not migrated and must be resubmitted. You can trigger a rebalance using the GraphQL mutation optimizeQueryPartitions. |
QueryResultRowCountLimit | | | Globally limits how many events a query can return. This flag can be set by administrators through GraphQL. |
RdnsDefaultLimit | 5,000 | | Sets the default number of resulting events allowed in the `rdns()` function. |
RdnsMaxLimit | 20,000 | | Sets the maximum number of resulting events allowed in the `rdns()` function. |
ReplaceANSIEscapeCodes | | | Controls whether LogScale replaces ANSI escape codes in the result set. They are replaced with the `\ufffd` character to prevent potential security issues when viewing the returned data. |
StateRowLimit | 20,000 | | Maximum number of rows allowed in functions. |
StaticQueryFractionOfCores | | | Limits queries from one organization (one user on single-organization clusters) to run on at most a certain fraction of mapper threads, effectively throttling queries to prevent one organization from consuming all capacity. |
TableCacheMaxStorageFraction | 0.001 | introduced in 1.148.0 | Sets the fraction of disk space to be used for caching file data used by query functions such as `match()` and `readFile()`. |
TableCacheMaxStorageFractionForIngestAndHttpOnly | 0.1 | introduced in 1.148.0 | Sets the fraction of disk space to be used on ingest or httponly nodes for caching file data used by query functions such as `match()` and `readFile()`. |
TargetMaxRateForDatasource | 2 | | Sets the target maximum ingest rate for each shard of a datasource. |
UnauthenticatedGraphQLSelectionSizeLimit | 150 | | Limits GraphQL queries from unauthenticated users on the total number of selected fields and fragments. |
UndersizedMergingRetentionPercentage | 20 | | When selecting undersized segments to merge, controls how wide a time span can be merged together, interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90. |
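The three DelayIngestResponseDueToIngestLag* settings work together as a threshold, a scale, and a cap. As a reading aid only, the sketch below shows one plausible way to combine them, based solely on the descriptions in the table; the function name and exact formula are illustrative assumptions, not LogScale's actual implementation:

```python
def ingest_delay_factor(lag_ms: float,
                        threshold_ms: float = 20_000,   # ...IngestLagThreshold
                        scale_ms: float = 300_000,      # ...IngestLagScale
                        max_factor: float = 2.0) -> float:  # ...IngestLagMaxFactor
    """Illustrative combination of the DelayIngestResponseDueToIngestLag* settings."""
    # Below the threshold, the delay feature does not kick in at all.
    if lag_ms <= threshold_ms:
        return 0.0
    # Each `scale_ms` milliseconds of lag beyond the threshold adds 1 to the
    # factor, capped at the configured maximum factor.
    return min((lag_ms - threshold_ms) / scale_ms, max_factor)
```

With the defaults, 320,000 ms of digest lag would yield a factor of 1.0, and any lag beyond roughly 620,000 ms would be capped at 2.0.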
Getting the Dynamic Configuration List
To obtain a list of all available dynamic configurations, use the dynamicConfigs() GraphQL query:
query{
dynamicConfigs{
dynamicConfigKey
}
}
curl -v -X POST $YOUR_LOGSCALE_URL/graphql \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d @- << EOF
{"query" : "query { dynamicConfigs { dynamicConfigKey } }"}
EOF
curl -v -X POST %YOUR_LOGSCALE_URL%/graphql ^
  -H "Authorization: Bearer %TOKEN%" ^
  -H "Content-Type: application/json" ^
  -d "{\"query\" : \"query { dynamicConfigs { dynamicConfigKey } }\"}"
curl.exe -X POST `
  -H "Authorization: Bearer $TOKEN" `
  -H "Content-Type: application/json" `
  -d '{"query" : "query { dynamicConfigs { dynamicConfigKey } }"}' `
  "$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;
use LWP;

# Read the token and base URL from the environment.
my $TOKEN = $ENV{"TOKEN"};
my $uri   = "$ENV{YOUR_LOGSCALE_URL}/graphql";
my $json  = '{"query" : "query { dynamicConfigs { dynamicConfigKey } }"}';

my $req = HTTP::Request->new( "POST", $uri );
$req->header( "Authorization" => "Bearer $TOKEN" );
$req->header( "Content-Type"  => "application/json" );
$req->content( $json );

my $lwp    = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->content, "\n";
#!/usr/local/bin/python3
import os
import requests

# Read the token and base URL from the environment.
url = os.environ["YOUR_LOGSCALE_URL"] + "/graphql"
mydata = '{"query" : "query { dynamicConfigs { dynamicConfigKey } }"}'

resp = requests.post(url,
                     data=mydata,
                     headers={
                         "Authorization": "Bearer " + os.environ["TOKEN"],
                         "Content-Type": "application/json",
                     })
print(resp.text)
const https = require('https');

// Build the request body; JSON.stringify produces valid single-line JSON.
const data = JSON.stringify({
  query: 'query { dynamicConfigs { dynamicConfigKey } }',
});

const options = {
  // hostname must be the bare host, without scheme or path.
  hostname: process.env.YOUR_LOGSCALE_URL,
  path: '/graphql',
  port: 443,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length,
    Authorization: 'Bearer ' + process.env.TOKEN,
    'User-Agent': 'Node',
  },
};

const req = https.request(options, (res) => {
  let body = '';
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (d) => {
    body += d;
  });
  res.on('end', () => {
    console.log(JSON.parse(body).data);
  });
});

req.on('error', (error) => {
  console.error(error);
});

req.write(data);
req.end();
This returns a list of key/value pairs with each configuration key and, when dynamicConfigValue is also selected, its current setting. For example:
{
"data": {
"dynamicConfigs": [
{
"dynamicConfigKey": "MaxIngestRequestSize",
"dynamicConfigValue": "None"
},
{
"dynamicConfigKey": "JoinRowLimit",
"dynamicConfigValue": "200000"
},
{
"dynamicConfigKey": "JoinDefaultLimit",
"dynamicConfigValue": "100000"
},
...
]
}
}
The exact list of configurable parameters depends on the version, feature flags, and environment.
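The key/value pairs in the response can be flattened into a plain dictionary for easier lookup. A small sketch, assuming a response with the same shape as the example above:

```python
def configs_to_dict(response: dict) -> dict:
    """Map each dynamicConfigKey to its dynamicConfigValue."""
    return {
        entry["dynamicConfigKey"]: entry.get("dynamicConfigValue")
        for entry in response["data"]["dynamicConfigs"]
    }

# Truncated sample response, as in the example output above.
example = {
    "data": {
        "dynamicConfigs": [
            {"dynamicConfigKey": "MaxIngestRequestSize", "dynamicConfigValue": "None"},
            {"dynamicConfigKey": "JoinRowLimit", "dynamicConfigValue": "200000"},
        ]
    }
}
print(configs_to_dict(example)["JoinRowLimit"])  # 200000
```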
Setting a Dynamic Configuration Value
Important
Changing Dynamic Configuration settings instantly changes the configuration and alters the operation of your LogScale instance. Contact Support if you need advice on these settings.
To set a Dynamic Config value, use the setDynamicConfig() mutation:
mutation {
setDynamicConfig(input: { config: NAME_OF_CONFIG, value: "VALUE_FOR_CONFIG" })
}
curl -v -X POST $YOUR_LOGSCALE_URL/graphql \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d @- << EOF
{"query" : "mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" }) }"}
EOF
curl -v -X POST %YOUR_LOGSCALE_URL%/graphql ^
  -H "Authorization: Bearer %TOKEN%" ^
  -H "Content-Type: application/json" ^
  -d "{\"query\" : \"mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \\\"VALUE_FOR_CONFIG\\\" }) }\"}"
curl.exe -X POST `
  -H "Authorization: Bearer $TOKEN" `
  -H "Content-Type: application/json" `
  -d '{"query" : "mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" }) }"}' `
  "$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;
use LWP;

# Read the token and base URL from the environment.
my $TOKEN = $ENV{"TOKEN"};
my $uri   = "$ENV{YOUR_LOGSCALE_URL}/graphql";
my $json  = '{"query" : "mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" }) }"}';

my $req = HTTP::Request->new( "POST", $uri );
$req->header( "Authorization" => "Bearer $TOKEN" );
$req->header( "Content-Type"  => "application/json" );
$req->content( $json );

my $lwp    = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->content, "\n";
#!/usr/local/bin/python3
import os
import requests

# Read the token and base URL from the environment.
url = os.environ["YOUR_LOGSCALE_URL"] + "/graphql"
mydata = r'{"query" : "mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: \"VALUE_FOR_CONFIG\" }) }"}'

resp = requests.post(url,
                     data=mydata,
                     headers={
                         "Authorization": "Bearer " + os.environ["TOKEN"],
                         "Content-Type": "application/json",
                     })
print(resp.text)
const https = require('https');

// Build the request body; JSON.stringify escapes the inner quotes for us.
const data = JSON.stringify({
  query: 'mutation { setDynamicConfig(input: { config: NAME_OF_CONFIG, value: "VALUE_FOR_CONFIG" }) }',
});

const options = {
  // hostname must be the bare host, without scheme or path.
  hostname: process.env.YOUR_LOGSCALE_URL,
  path: '/graphql',
  port: 443,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length,
    Authorization: 'Bearer ' + process.env.TOKEN,
    'User-Agent': 'Node',
  },
};

const req = https.request(options, (res) => {
  let body = '';
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (d) => {
    body += d;
  });
  res.on('end', () => {
    console.log(JSON.parse(body).data);
  });
});

req.on('error', (error) => {
  console.error(error);
});

req.write(data);
req.end();
{
"data" : {
"setDynamicConfig" : true
}
}
For example:
mutation {
setDynamicConfig(input: { config: GroupDefaultLimit, value: "30000" })
}
curl -v -X POST $YOUR_LOGSCALE_URL/graphql \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d @- << EOF
{"query" : "mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" }) }"}
EOF
curl -v -X POST %YOUR_LOGSCALE_URL%/graphql ^
  -H "Authorization: Bearer %TOKEN%" ^
  -H "Content-Type: application/json" ^
  -d "{\"query\" : \"mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: \\\"30000\\\" }) }\"}"
curl.exe -X POST `
  -H "Authorization: Bearer $TOKEN" `
  -H "Content-Type: application/json" `
  -d '{"query" : "mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" }) }"}' `
  "$YOUR_LOGSCALE_URL/graphql"
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;
use LWP;

# Read the token and base URL from the environment.
my $TOKEN = $ENV{"TOKEN"};
my $uri   = "$ENV{YOUR_LOGSCALE_URL}/graphql";
my $json  = '{"query" : "mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" }) }"}';

my $req = HTTP::Request->new( "POST", $uri );
$req->header( "Authorization" => "Bearer $TOKEN" );
$req->header( "Content-Type"  => "application/json" );
$req->content( $json );

my $lwp    = LWP::UserAgent->new;
my $result = $lwp->request( $req );
print $result->content, "\n";
#!/usr/local/bin/python3
import os
import requests

# Read the token and base URL from the environment.
url = os.environ["YOUR_LOGSCALE_URL"] + "/graphql"
mydata = r'{"query" : "mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: \"30000\" }) }"}'

resp = requests.post(url,
                     data=mydata,
                     headers={
                         "Authorization": "Bearer " + os.environ["TOKEN"],
                         "Content-Type": "application/json",
                     })
print(resp.text)
const https = require('https');

// Build the request body; JSON.stringify escapes the inner quotes for us.
const data = JSON.stringify({
  query: 'mutation { setDynamicConfig(input: { config: GroupDefaultLimit, value: "30000" }) }',
});

const options = {
  // hostname must be the bare host, without scheme or path.
  hostname: process.env.YOUR_LOGSCALE_URL,
  path: '/graphql',
  port: 443,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length,
    Authorization: 'Bearer ' + process.env.TOKEN,
    'User-Agent': 'Node',
  },
};

const req = https.request(options, (res) => {
  let body = '';
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (d) => {
    body += d;
  });
  res.on('end', () => {
    console.log(JSON.parse(body).data);
  });
});

req.on('error', (error) => {
  console.error(error);
});

req.write(data);
req.end();
{
"data" : {
"setDynamicConfig" : true
}
}
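Putting the two operations together, the following sketch sets GroupDefaultLimit and then reads it back via dynamicConfigs to confirm the change. It uses only the query and mutation shown on this page; the helper names (`graphql_payload`, `post_graphql`, `set_and_verify`) are illustrative, and it assumes the same TOKEN and YOUR_LOGSCALE_URL environment variables as the examples above:

```python
import json
import os
import urllib.request


def graphql_payload(doc: str) -> str:
    """Wrap a GraphQL document in the JSON body shape used throughout this page."""
    return json.dumps({"query": doc})


def post_graphql(base_url: str, token: str, payload: str) -> dict:
    """POST a GraphQL payload and return the decoded JSON response."""
    req = urllib.request.Request(
        base_url + "/graphql",
        data=payload.encode("utf-8"),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def set_and_verify(base_url: str, token: str, config: str, value: str):
    """Set a dynamic configuration value, then read it back from dynamicConfigs."""
    mutation = graphql_payload(
        'mutation { setDynamicConfig(input: { config: %s, value: "%s" }) }'
        % (config, value)
    )
    post_graphql(base_url, token, mutation)

    query = graphql_payload(
        "query { dynamicConfigs { dynamicConfigKey dynamicConfigValue } }"
    )
    configs = post_graphql(base_url, token, query)["data"]["dynamicConfigs"]
    return {c["dynamicConfigKey"]: c["dynamicConfigValue"] for c in configs}.get(config)


if __name__ == "__main__":
    print(set_and_verify(os.environ["YOUR_LOGSCALE_URL"], os.environ["TOKEN"],
                         "GroupDefaultLimit", "30000"))
```

Verifying after a change is useful because setDynamicConfig only returns `true`, not the resulting value.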