Limits and Standards

This page lists the limits and standard operating parameters of LogScale. See Best Practice for guidance on ingesting data via the Ingest API.

General Limits and Parameters

Below is a list of general limits and parameters:

Note

Unless otherwise specified, all multi-byte data sizes are expressed in SI units using decimal (base 10). For example:

  • 1 KB = 1,000 bytes

  • 1 MB = 1,000,000 bytes

  • 1 GB = 1,000,000,000 bytes

  • 1 TB = 1,000,000,000,000 bytes
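The distinction matters because some variable defaults on this page are binary (IEC) multiples, such as the 1 MiB variable default for MAX_EVENT_SIZE. A minimal Python illustration of the two interpretations, using numbers taken from the tables below:

    # Decimal (SI) interpretation, as used for the values on this page:
    MB = 1_000 ** 2
    print(200 * MB)     # 200,000,000 bytes = 200 MB

    # Binary (IEC) interpretation, which some variable defaults correspond to:
    MiB = 1_024 ** 2
    print(200 * MiB)    # 209,715,200 bytes = 200 MiB, matching the
                        # MaxCsvFileUploadSizeBytes variable default below.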

Data Structure

Each entry lists the limit, its default value, and the dynamic variable and/or configuration variable that controls it, where applicable.

Limit: Max number of datasources in a repository.
Default: 10,000 datasources

Limit: Max number of fields in an event. During ingest, fields are sorted alphabetically by name and the first fields, up to this limit, are parsed; the remaining named fields are dropped. The @rawstring field is not modified and still contains all of the data (see the sketch after this table).
Default: 8,000
Configuration variable: MAX_EVENT_FIELD_COUNT (variable default: 8,000 fields)

Limit: Max event size. When the configured maximum event size is reached, whether in @rawstring, in other fields, or both, the event data is truncated: fields are removed entirely, and @rawstring is truncated down to the allowed maximum with ... appended, so that the size of all other fields plus the size of @rawstring stays below the configured maximum. Only @rawstring, @timestamp, and @timezone are added when truncation occurs (see the sketch after this table).
Default: 1 MB
Configuration variable: MAX_EVENT_SIZE (variable default: 1 MiB)

Limit: Max CSV lookup file size (see Lookup Files).
Default: 200 MB
Dynamic variable: MaxCsvFileUploadSizeBytes (variable default: 209,715,200 bytes (200 MB))
Configuration variable: MAX_FILEUPLOAD_SIZE (variable default: 104,857,600 bytes (100 MB))

Limit: Max JSON lookup file upload size (see Lookup Files).
Default: 100 MB
Dynamic variable: MaxJsonFileUploadSizeBytes (variable default: 104,857,600 bytes (100 MB))
Configuration variable: MAX_FILEUPLOAD_SIZE (variable default: 104,857,600 bytes (100 MB))

Limit: Max file upload size (see Lookup Files).
Default: 2,048 MB
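The field-count and event-size caps above interact at ingest. The following Python sketch is a conceptual illustration of the documented behavior only, not LogScale's implementation; the function names are invented for the example:

    MAX_EVENT_FIELD_COUNT = 8_000
    MAX_EVENT_SIZE = 1_048_576  # 1 MiB, the variable default

    def cap_field_count(fields: dict) -> dict:
        # Fields are sorted alphabetically by name; only the first
        # MAX_EVENT_FIELD_COUNT are parsed, and the rest are dropped.
        kept = sorted(fields)[:MAX_EVENT_FIELD_COUNT]
        return {name: fields[name] for name in kept}

    def truncate_event(rawstring: str, fields: dict) -> tuple[str, dict]:
        # When the event exceeds the maximum size, fields are removed
        # entirely and @rawstring is cut down with "..." appended, so the
        # size of all other fields plus @rawstring stays below the maximum.
        # Only @rawstring, @timestamp, and @timezone survive truncation.
        size = len(rawstring) + sum(len(k) + len(str(v)) for k, v in fields.items())
        if size <= MAX_EVENT_SIZE:
            return rawstring, fields
        kept = {k: v for k, v in fields.items() if k in ("@timestamp", "@timezone")}
        budget = MAX_EVENT_SIZE - sum(len(k) + len(str(v)) for k, v in kept.items())
        return rawstring[: budget - 3] + "...", kept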

Query Related Limits

Each entry lists the limit, its default value, and any controlling variables.

Limit: Max memory a live query can consume during its execution. For non-live queries, the memory limit is determined by QueryMemoryLimit, which is 100 MB by default. By default, LiveQueryMemoryLimit has the same value as QueryMemoryLimit.
Default: 1 GB
Dynamic variable: LiveQueryMemoryLimit (variable default: 100,000,000 bytes)

Limit: Max query length, in characters.
Default: 66,000 characters

Limit: Maximum memory the query coordinator node can allocate during the execution of a query. The memory limits for static and live queries are computed as 1/4 of the coordinator's memory limit (see the worked example after this table). The static query state size limit and the live query state size limit are the state sizes on the workers.

This value allows cluster operators to control how many concurrent queries a system allows, based on its hardware configuration. The lower the value, the more queries any given coordinator can run concurrently, since each query is allowed less of the available memory. The reason this indirectly controls the memory allowed for each query state is that each coordinator node holds 4 copies of the query state/result at any given point during query execution.

This memory limit in turn determines the static query state size limit (QueryMemoryLimit) and the live query state size limit (LiveQueryMemoryLimit), which are computed from the value of the QueryCoordinatorMemoryLimit configuration.
Default: 4 GB
Dynamic variable: QueryCoordinatorMemoryLimit (variable default: 4,000,000,000 bytes)

Limit: Maximum amount of memory, in bytes, that a worker node can allocate to each historic/static query during its execution. Since version 1.116, it cannot be configured directly: its value is set by the QueryCoordinatorMemoryLimit configuration. See also LiveQueryMemoryLimit for live queries.
Default: 1 GB
Dynamic variable: QueryMemoryLimit (variable default: 100,000,000 bytes)

Limit: Maximum amount of memory, in bytes, a historic/static query can consume during its execution. See also LiveQueryMemoryLimit for live queries.
Default: 100,000,000 bytes
Dynamic variable: QueryMemoryLimit (variable default: 100,000,000 bytes)

Limit: Number of pipes that can be included in a query.
Default: 99 pipes

Limit: Max number of results that any query can return. Increasing this value beyond the default may cause browser performance issues, including memory exhaustion or UI hangs when rendering large result sets.
Default: 1,000,000
Dynamic variable: StateRowLimit (variable default: 200,000 rows)

Limit: Maximum number of events processed in a given function. It is strongly recommended not to increase this limit beyond its default value, as doing so can cause cluster instability or crashes. If you experience heavy query load, particularly from the sort(), table(), head(), and tail() functions, decreasing this limit can help alleviate the load, as it lowers the number of events those functions are allowed to collect.
Default: 20,000
Dynamic variable: StateRowLimit (variable default: 200,000 rows)

Limit: Maximum amount of memory, in bytes, that a worker node can allocate to each live query during its execution. For non-live queries, the memory limit is determined by QueryMemoryLimit; by default, LiveQueryMemoryLimit has the same value as QueryMemoryLimit. Since version 1.116, it cannot be configured directly: its value is set by the QueryCoordinatorMemoryLimit configuration.
Default: 1 GB
Dynamic variable: LiveQueryMemoryLimit (variable default: 100,000,000 bytes)
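As a worked example of the 1/4 relationship described for QueryCoordinatorMemoryLimit above (an illustration of the arithmetic only; the exact accounting is internal to LogScale):

    # The coordinator holds 4 copies of the query state/result, so each
    # query state is allowed one quarter of the coordinator's memory limit.
    query_coordinator_memory_limit = 4_000_000_000  # bytes, the default
    per_query_state_limit = query_coordinator_memory_limit // 4
    print(per_query_state_limit)  # 1,000,000,000 bytes, i.e. the 1 GB
                                  # per-query worker allocation listed above.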

Function Limits

Each entry lists the limit, its default value, and any controlling variables.

Limit: Default number of events set in an RDNS request. See the rdns() function.
Default: 5,000
Dynamic variable: RdnsDefaultLimit (variable default: 5,000 events)

Limit: Default value for the limit parameter in groupBy(), selfJoin(), and some other functions, when not specified. GroupDefaultLimit defaults to either 20,000 or the value of the GroupMaxLimit configuration, whichever is smaller. GroupDefaultLimit cannot be larger than GroupMaxLimit.
Default: 20,000 rows
Dynamic variable: GroupDefaultLimit (variable default: 20,000 group elements)

Limit: Max value for the limit parameter in the groupBy() function, meaning the function can collect up to one million groups. Due to stability concerns, this variable does not allow groupBy() to return the full million rows as a result when the function is the last aggregator: that is governed by the QueryResultRowCountLimit dynamic configuration.
Default: 1,000,000
Dynamic variable: GroupMaxLimit (variable default: 1,000,000 group elements)

Limit: Max number of rows that join() and selfJoin() can return. The dynamic configuration JoinRowLimit replaces the MAX_JOIN_LIMIT environment variable. If the value of MAX_JOIN_LIMIT has been modified, that value is used as the default.
Default: 200,000 rows
Dynamic variable: JoinRowLimit (variable default: 200,000 rows)

Limit: Max number of events in the tail(), head(), and sort() functions.
Default: 20,000 events
Dynamic variable: StateRowLimit (variable default: 200,000 rows)

Limit: Max number of events in an RDNS request. See the rdns() function.
Default: 20,000
Dynamic variable: RdnsMaxLimit (variable default: 20,000 events)

Limit: Memory limit for the mapper phase of a collect() function running as a top-level function, that is, how much data such a function can store.
Default: 10 MiB

Limit: Memory limit for the mapper phase of a collect() function running in a subquery, or as a subaggregator to another function, that is, how much data such a function can store.
Default: 1 MiB
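Dynamic variables such as GroupMaxLimit can typically be changed at runtime through the GraphQL API, whereas configuration variables such as MAX_EVENT_SIZE are environment variables read at startup. The following is a minimal Python sketch that assumes a setDynamicConfig GraphQL mutation and a suitable API token; verify the mutation name and input shape against the GraphQL schema of your LogScale version before relying on it:

    import requests

    # Placeholders: substitute your own cluster URL and API token.
    LOGSCALE_GRAPHQL_URL = "https://logscale.example.com/graphql"
    API_TOKEN = "<api-token>"

    # Assumed mutation shape; check your version's GraphQL schema.
    mutation = """
    mutation {
      setDynamicConfig(input: { config: GroupMaxLimit, value: "1000000" })
    }
    """

    resp = requests.post(
        LOGSCALE_GRAPHQL_URL,
        json={"query": mutation},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    resp.raise_for_status()
    print(resp.json())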

API Limits

Each entry lists the limit, its default value, and any controlling variables.

Limit: Total number of selected fields and fragments allowed in GraphQL queries for authenticated users.
Default: 1,000
Dynamic variable: AuthenticatedGraphQLSelectionSizeLimit

Limit: Total number of selected fields and fragments allowed in GraphQL queries for unauthenticated users.
Default: 150
Dynamic variable: UnauthenticatedGraphQLSelectionSizeLimit (variable default: 150)

Limit: Max body size in POST requests, after decompression.
Default: 32 MB

Limit: Max body size in POST requests, before decompression.
Default: 32 MB
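To see how the GraphQL selection-size limits are counted, consider the document below (shown embedded in Python; the field names are illustrative, not a statement about LogScale's schema). Every selected field and fragment counts toward the limit:

    # This document selects three fields (searchDomains, name, description),
    # so it contributes 3 toward the 1,000 (authenticated) or 150
    # (unauthenticated) selection-size limits above.
    query = """
    {
      searchDomains {
        name
        description
      }
    }
    """

    selection_count = 3            # counted by hand for this document
    AUTHENTICATED_LIMIT = 1_000
    assert selection_count <= AUTHENTICATED_LIMIT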

Standards

Below are some standards in LogScale.

Character sets

LogScale works with the JVM standard charsets. The character sets below are guaranteed to be supported in future JVM versions.

Note

These are the character sets LogScale accepts in the HTTP request; they are not necessarily the same character sets accepted by your log shipper, such as the Log Collector or another third-party log shipper, when ingesting files. Files are read by the log shipper, which then makes HTTP requests to LogScale, so the character sets the shipper supports are unrelated to what LogScale supports at the HTTP layer. A request example follows the list below.

  • US-ASCII

  • ISO-8859-1

  • UTF-8

  • UTF-16

  • UTF-16BE

  • UTF-16LE

  • UTF-32

  • UTF-32BE

  • UTF-32LE

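For example, a client can declare one of the supported character sets explicitly in the Content-Type header of the request. A minimal Python sketch; the endpoint path and token are placeholders, so adjust them to your cluster and the ingest API you use:

    import requests

    resp = requests.post(
        "https://logscale.example.com/api/v1/ingest/raw",  # placeholder endpoint
        headers={
            "Authorization": "Bearer <ingest-token>",
            # UTF-8 is one of the character sets listed above.
            "Content-Type": "text/plain; charset=UTF-8",
        },
        data="2024-01-01T00:00:00Z example event".encode("utf-8"),
    )
    resp.raise_for_status()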
If you have any questions about whether a character set is accepted or supported, contact the LogScale Support Team.