LogScale Configuration Parameters

Below is an alphabetical list of all the configuration parameters (environment variables) used to configure LogScale on your infrastructure. The list covers parameters specific to the LogScale software itself, as well as options for other systems that integrate with LogScale (e.g., Amazon AWS, Google Cloud). Click on the name of a variable below for more details on it, along with related and similar options.

To change any of these variables on a LogScale installation, open the LogScale configuration file, server.conf, in the /etc/humio directory with a plain text editor. Once you've finished making changes, restart LogScale so they take effect: restart the single server or, in a cluster, each affected node (all nodes if the change applies to the whole cluster). How you restart depends on how you deployed LogScale.
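
For example, on a package-based Linux installation managed by systemd, the workflow might look like the following sketch. The service name humio-server and the example values are assumptions and may differ in your environment; for Docker or Kubernetes deployments, restart the container or pod instead.

    # Open the configuration file for editing
    sudo vi /etc/humio/server.conf

    # Example server.conf entries, one KEY=VALUE per line
    # (values shown are placeholders, not recommendations)
    PUBLIC_URL=https://logscale.example.com
    HUMIO_PORT=8080

    # Restart the service so the changes take effect
    # (the unit name humio-server is an assumption)
    sudo systemctl restart humio-server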

Table: Configuration Parameters Table

Variable | Default Value | Availability | Description
ACTION_LINK_BASE_URL | PUBLIC_URL | | Sets the base URL used in links sent from Actions
AGGREGATE_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | introduced in 1.143.0 | Maximum number of field values stored for each aggregate alert
AGGREGATE_ALERTS_MAX_CATCH_UP_LIMIT | 24h | introduced in 1.143.0 | Sets how far back aggregate alerts are able to catch up after delays, expressed using Relative Time Syntax. While an aggregate alert is catching up, it will not react to new events, so if a single event is causing the alert or the associated action(s) to fail, the alert will not trigger until that event is outside the catch-up limit. For more information on what aggregate alerts are, see Aggregate Alerts.
ALERT_DESPITE_WARNINGS | false | | Alerts are activated even with warnings from the alert query
ALERT_DISCLAIMER | | | Disclaimer to notify that alerts are sent from a given view or repository
ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | | Maximum number of field values stored for each standard alert
ALLOW_CHANGE_REPO_ON_EVENTS | false | | HEC allows ingest to any specified repository
ALLOW_XML_DOCTYPE_DECL | false | | ALLOW_XML_DOCTYPE_DECL Environment Variable
API_EXPLORER_ENABLED | true | | Enables or disables the API GraphQL Explorer functionality (see Accessing GraphQL using API Explorer).
AUDITLOG_SENSITIVE_RETENTION_DAYS | 200 * 365 days | | Specifies when sensitive logs are deleted by retention in the humio-audit repository
AUTH_ALLOW_SIGNUP | true | | AUTH_ALLOW_SIGNUP Environment Variable
AUTH_BY_PROXY_HEADER_NAME | none | | Specifies usernames in header for the proxy
AUTHENTICATION_METHOD | single-user | | Enables a standard LDAP bind method
AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN | false | | Automatically creates users in LogScale when they log in with external authentication methods
AUTO_UPDATE_GROUP_MEMBERSHIPS_ON_SUCCESSFUL_LOGIN | false | | Allows group membership rules to be transferred at login
AUTO_UPDATE_IP_LOCATION_DB | true | deprecated in 1.19.0 | Deprecated and replaced by AUTO_UPDATE_MAXMIND
AUTO_UPDATE_MAXMIND | true | | Enables automatic update of the MaxMind GeoLite2 database
AUTOSHARDING_CHECKINTERVAL_MS | 30,000 ms | removed in 1.152.0 | Sets the increase interval of the delay triggered for auto-sharding
AUTOSHARDING_MAX | 1,024 shards | | Sets the number of datasources created during auto-sharding.
AUTOSHARDING_TRIGGER_DELAY_MS | 14,400,000 ms | removed in 1.152.0 | Sets the delay in ms to trigger auto-sharding in case of high load
AWS_ACCESS_KEY_ID | | | Sets the access key for AWS
AWS_SECRET_ACCESS_KEY | | | Sets the secret access key for AWS
BACKUP_DIR | humio-backup | deprecated in 1.57.0 | Specifies the directory in which to write a backup of the data files
BACKUP_KEY | developer | | Specifies the secret key used to encrypt data file backups
BACKUP_NAME | none | deprecated in 1.57.0 | Names the backup of the data files
BITBUCKET_OAUTH_CLIENT_ID | none | | The Key from your BitBucket OAuth Consumer
BITBUCKET_OAUTH_CLIENT_SECRET | none | | The Secret from your BitBucket OAuth Consumer
BOOTSTRAP_HOST_ID | 0 | | Sets an ID for the server at first startup
BOOTSTRAP_HOST_UUID_COOKIE | none | | Sets a unique identifier of the local filesystem contents
BOOTSTRAP_ROOT_TOKEN_HASHED | | | Specifies the hashed root token for a LogScale instance
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD | false | | For bucket storage to work with MinIO, disables checksum matching while uploading the file
BUCKET_STORAGE_MULTIPLE_ENDPOINTS | false | | Whether the proxy configuration is applied to all bucket storage backends
BUCKET_STORAGE_SSE_COMPATIBLE | | | Makes bucket storage not verify checksums of raw objects after uploading to an S3 bucket
CACHE_STORAGE_DIRECTORY | none | | Enables a local cache of segment files
CACHE_STORAGE_PERCENTAGE | 90 | | Enables caching of files from a slow network file system (EBS) or for a file system on spinning disks
COMPRESSION_TYPE | high | | Sets default compression levels for segments and minisegments
COOKIE_DOMAIN | | | Sets the domain when configuring session cookies
COOKIE_PATH | | | Indicates a URL path that must exist in the requested URL in order to send the cookie header
COOKIE_SAMESITE | | | Sets whether the cookie should be restricted to a first-party or same-site context
COOKIE_SECURE | | | Indicates that the cookie is sent to the server only when the request is made with the https: scheme
CORES | Available Processors | | Specifies the number of CPU cores for the machine running LogScale
CORS_ALLOWED_ORIGINS | true | | Websites or IP addresses that allow Cross-Origin Resource Sharing
CREATE_HUMIO_SEARCH_ALL | false | | Allows creation of the humio-search-all view
DAYS_BEFORE_TOMBSTONE_DELETION | 14 | | Sets how long deleted repositories or views remain restorable
DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS | true | | Sets whether or not the existing segment decider will run
DEFAULT_ALLOW_UPDATE_DESIRED_DIGESTERS | true | | Enables automatic management of the digest partition table
DEFAULT_DIGEST_REPLICATION_FACTOR | 3 | | Allows configuration of the replication factor used for the digest partition table
DEFAULT_GROUPS | | | List of default groups that users belong to
DEFAULT_SEGMENT_REPLICATION_FACTOR | 1 | | Sets the number of replicas each segment file will have.
DELETE_BACKUP_AFTER_MILLIS | 604,800,000 ms | | Configures when data file backups must be deleted
DELETE_ON_INGEST_QUEUE | true | | Deletes events from the ingest queue
DIGEST_EXECUTOR_CORES | CORES divided by 2 | | Internal configuration to halve the number of CPU cores set in the CORES variable
DIGEST_REPLICATION_FACTOR | | | Sets the replication factor for digest
DIRECTORY | humio-data | | Data directory for LogScale
DISABLE_BUCKET_CLEANING_TMP_FILES | false | | Allows turning off cleaning of files in bucket storage temporary file directories
DUMP_THREADS_SECONDS | | | Specifies the interval at which thread dumps are written
ELASTIC_PORT | | | Sets the port for the ElasticSearch bulk endpoint
EMAIL_ACTION_DISCLAIMER | | | Disclaimer in every email to clarify that alerts or scheduled searches are sent as LogScale actions
EMERGENCY_USERS | false | | Enables emergency users in case of issues with the identity provider
ENABLE_AGGREGATE_ALERTS | true | introduced in 1.143.0 | Enables/disables aggregate alerts
ENABLE_ALERTS | true | | Enables/disables all alerts
ENABLE_BEARER_TOKEN_AUTHORIZATION | false | | Uses less secure bearer tokens instead of secure cookies
ENABLE_EVENT_FORWARDING | false | | Enables/disables event forwarding
ENABLE_FDR_POLLING_ON_NODE | true | | Enables polling and ingest of FDR data on the LogScale node
ENABLE_FILTER_ALERTS | true | | Enables/disables filter alerts
ENABLE_INGEST_FEED | true | | Enables/disables ingest feeds on the given node. This may vary between nodes in the cluster such that only some nodes run ingest feeds.
ENABLE_PERSONAL_API_TOKENS | true | | Enables/disables use of personal API tokens
ENABLE_QUERY_LOAD_BALANCING | true | | Allows queries to execute locally on the node that receives the requests
ENABLE_SANDBOXES | true | | Enables/disables sandbox repositories
ENABLE_SCHEDULED_SEARCHES | false | | Sets whether scheduled searches should be executed
ENABLEINTERNALLOGGER | true | | ENABLEINTERNALLOGGER Environment Variable
ENFORCE_AUDITABLE | false | | Sets permissions and enforces Auditable mode for root access
EXTERNAL_URL | http://localhost:PORT | | URL that other hosts can use to reach this server
EXTRA_KAFKA_CONFIGS_FILE | | | Allows adding extra Kafka configuration properties
FDR_USE_PROXY | | | Makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables
FDR_VISIBILITY_TIMEOUT | 15 m | | Visibility timeout of SQS messages read by the FDR integration
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT | 15 triggers/minute | | Sets the trigger limit for filter alerts with emails attached
FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT | 100 triggers/minute | | Sets the trigger limit for filter alerts without emails attached
FILTER_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | | Maximum number of field values stored for each filter alert
FILTER_ALERTS_MAX_CATCH_UP_LIMIT | 24h | | Sets how far back filter alerts are able to catch up after delays, expressed using Relative Time Syntax. Note that while a filter alert is catching up, it will not react to new events, so if a single event is causing the alert or the associated action(s) to fail, the alert will not trigger until that event is outside the catch-up limit. For more information, see Filter Alerts.
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA | 24m | | Sets how long filter alerts will wait for query warnings about missing data to disappear, expressed using Relative Time Syntax. If a filter alert is waiting for query warnings to disappear for longer than 15 minutes, the alert will not react to new events. If the query warning is permanent, the alert will not trigger until the whole waiting time has passed. For more information, see Filter Alerts.
FLUSH_BLOCK_SECONDS | 900 seconds | | How long a mini-segment can stay open
FORWARDING_BREAKER_EXP_BACKOFF_FACTOR | 2.0 | | Increases the reset time after each new failure
FORWARDING_BREAKER_FAILURES | 50 | | Number of failures before stopping all events in event forwarding
FORWARDING_BREAKER_MAX_RESET | 60 seconds | | Maximum reset time in event forwarding
FORWARDING_BREAKER_RESET | 1 second | | Waiting time before a new event in event forwarding
FORWARDING_BREAKER_TIMEOUT | 10 | | Timeout before a call is considered a failure
FORWARDING_MAX_CONCURRENCY | 50,000 | | Maximum number of events waiting to be forwarded
GC_KILL_THRESHOLD_MILLIS | | | Threshold for timeSpentOnGC that makes LogScale exit when exceeded
GCP_EXPORT_WORKLOAD_IDENTITY | | | Uses Workload Identity for exporting query results to a bucket
GCP_STORAGE_BUCKET | | | Sets the name of the bucket to use
GCP_STORAGE_CONCURRENCY | cores/2 | | The number of concurrent file downloads/uploads in GCP storage
GCP_STORAGE_ENCRYPTION_KEY | | | Sets the encryption key of the bucket to use
GCP_STORAGE_OBJECT_KEY_PREFIX | | | Allows nodes to share a bucket
GCP_STORAGE_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy for communicating with Google Cloud Bucket Storage
GCP_STORAGE_WORKLOAD_IDENTITY | | | Uses Workload Identity for bucket storage
GITHUB_OAUTH_CLIENT_ID | | | GITHUB_OAUTH_CLIENT_ID Environment Variable
GITHUB_OAUTH_CLIENT_SECRET | | | GITHUB_OAUTH_CLIENT_SECRET Environment Variable
GLOB_ALLOW_LIST_EMAIL_ACTIONS | Allow all | | Blocks recipients of email actions that are not in the provided allow list.
GLOB_MATCH_LIMIT | 20,000 | | Sets the maximum number of rows for csv_file in the match() function
GLOBAL_THROTTLE_PERCENTAGE | 20 | | Percentage of time allowed for a global publishing thread before other transactions of that type are throttled
GOOGLE_OAUTH_CLIENT_ID | | | The client_id from your Google OAuth App
GOOGLE_OAUTH_CLIENT_SECRET | | | The client_secret from your Google OAuth App
HTTP_PROXY_ALLOW_ACTIONS_NOT_USE | false | | Allows actions not to use the HTTP proxy
HTTP_PROXY_ALLOW_NOTIFIERS_NOT_USE | false | deprecated in 1.19.0 | Configures alert notifiers not to use the HTTP proxy
HTTP_PROXY_HOST | | | Configures the HTTP proxy host used by connections from LogScale
HTTP_PROXY_PASSWORD | | | Sets the password for the HTTP proxy configuration
HTTP_PROXY_PORT | 3129 | | Sets the port for the HTTP proxy configuration
HTTP_PROXY_USERNAME | | | Sets the username for the HTTP proxy configuration
HUMIO_HTTP_BIND | HUMIO_SOCKET_BIND | | IP to bind the HTTP listening socket to
HUMIO_JVM_ARGS | | removed in 1.147.99 | Allows supplementing or tuning the JVM running LogScale
HUMIO_KAFKA_TOPIC_PREFIX | | | Adds a prefix to the topic names in Kafka
HUMIO_LOG4J_CONFIGURATION | | | Sets the path for the log4j2-custom-config file
HUMIO_PORT | | | Sets the TCP port to listen on for HTTP traffic
HUMIO_SOCKET_BIND | 0.0.0.0 | | Sets the IP address to bind the UDP/TCP/HTTP listening sockets
IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES | 4320 minutes | | Time in minutes dashboard queries keep running when not polled
INGEST_FEED_AWS_ACCESS_KEY_ID | | | Access key ID to use when ingesting from AWS-based ingest feeds. Optional; if not set, then we attempt to use the default credentials provider chain.
INGEST_FEED_AWS_CREDENTIALS_PROVIDER_USE_PROXY | true | | Defines whether a proxy will be used when provisioning AWS credentials.
INGEST_FEED_AWS_DOWNLOAD_MAX_OBJECT_SIZE_DEFAULT | 2,147,483,648 bytes | | Maximum size of objects downloaded from S3.
INGEST_FEED_AWS_PROCESSING_DOWNLOAD_BUFFER_SIZE_DEFAULT | 8,388,608 bytes | | Size of the buffer when downloading.
INGEST_FEED_AWS_PROCESSING_EVENT_BUFFER_SIZE_DEFAULT | 1 MB | | Size of the buffer after preprocessing and splitting into individual events.
INGEST_FEED_AWS_PROCESSING_EVENTS_PER_BATCH_DEFAULT | 1,000 events | | Number of events ingested per batch.
INGEST_FEED_AWS_REGION | | | Specifies the region of the AWS bucket. This is not necessary if the value is supplied through the credentials provider chain.
INGEST_FEED_AWS_ROLE_ARN | | | Specifies the ARN of the S3 bucket on AWS.
INGEST_FEED_GOVERNOR_GAIN_PER_CORE_DEFAULT | 100,000 | | Change in rate when under/over the setpoint. Increasing this will make the governor more aggressive in changing the ingest rate.
INGEST_FEED_GOVERNOR_INGEST_DELAY_HIGH_DEFAULT | 10 seconds | | Default ingest delay high setpoint for the ingest feed governor.
INGEST_FEED_GOVERNOR_INGEST_DELAY_LOW_DEFAULT | 5 seconds | | Default ingest delay low setpoint for the ingest feed governor.
INGEST_FEED_GOVERNOR_INITIAL_RATE_PER_CORE | 8,000 bytes per second per core | | The initial rate of ingest allowed by the governor in bytes per second per core.
INGEST_FEED_POLL_PERIOD | 60 seconds | | Specifies the amount of time between successive polls of ingest feeds without pressure. In case of pressure, the ingest feeds will be polled as soon as possible.
INGEST_FEED_SECRET_ACCESS_KEY | see description | | Secret access key to use when ingesting from AWS-based ingest feeds. Optional; if not set, then we attempt to use the default credentials provider chain.
INGEST_QUEUE_REPLICATION_FACTOR | 2 | | Replication factor for the Kafka ingest queue
INITIAL_DISABLED_NODE_TASK | empty | | Enables/disables node tasks
INITIAL_FEATURE_FLAGS | empty | | Configures feature flags within LogScale
IOC_CROWDSTRIKE_API_CLIENT_ID | | | Sets the client ID for the CrowdStrike Intel API
IOC_CROWDSTRIKE_API_CLIENT_SECRET | | | Sets the client secret for the CrowdStrike Intel API
IOC_CROWDSTRIKE_API_URL | | | CrowdStrike API server URL for IOCs database download
IOC_UPDATE_SERVER_URL | https://ioc.humio.com | | API server URL for IOCs database download
IOC_USE_HTTP_PROXY | true | | Allows choosing HTTP_PROXY for IOCs database updates
IP_FILTER_ACTIONS | | | IP-based access control list (ACL) for outgoing connections made by actions. Replaces IP_FILTER_NOTIFIERS
IP_FILTER_NOTIFIERS | | | IP-based access control list (ACL) for outgoing connections made by notifiers. Replaced by IP_FILTER_ACTIONS
IP_FILTER_RDNS | | | IP filter for filtering which IP addresses may be queried with the rdns() function.
IP_FILTER_RDNS_SERVER | | | IP filter for filtering which DNS servers may be specified in the rdns() function.
JWKS_REFRESH_INTERVAL | 3,600,000 | | JWKS_REFRESH_INTERVAL Environment Variable
KAFKA_CLIENT_RACK | | | Specifies the client.rack value directly.
KAFKA_CLIENT_RACK_ENV_VAR | ZONE | | The name of the environment variable that holds the value of client.rack.
KAFKA_MANAGED_BY_HUMIO | true | | Sets whether LogScale creates topics and manages replicas in Kafka
KAFKA_SERVERS | | | Kafka bootstrap servers list
LDAP_AUTH_PRINCIPAL | | | Allows transforming LogScale login usernames to enable LDAP authentication
LDAP_AUTH_PRINCIPALS_REGEX | | | Separates multiple patterns with users in more locations within LDAP
LDAP_AUTH_PROVIDER_CERT | | | Specifies the PEM-format value of the certificate required when enabling TLS/SSL-secured communications to the LDAP server
LDAP_AUTH_PROVIDER_URL | | | The URL to connect to for LDAP authentication
LDAP_DOMAIN_NAME | | | Allows users to log in with their username and not the domain name
LDAP_GROUP_BASE_DN | | | The query to perform to get the user's groups for LDAP
LDAP_GROUP_FILTER | | | LDAP_GROUP_FILTER Environment Variable
LDAP_GROUP_SEARCH_BIND_FOR_LOOKUP | false | | LDAP_GROUP_SEARCH_BIND_FOR_LOOKUP Environment Variable
LDAP_GROUPNAME_ATTRIBUTE | | | Allows using an alternate attribute on the group record in LDAP as the group name in the LogScale RBAC configuration
LDAP_SEARCH_BASE_DN | | | Sets the base DN for the LDAP-Search authentication method
LDAP_SEARCH_DOMAIN_NAME | | | LDAP_SEARCH_DOMAIN_NAME Environment Variable
LDAP_SEARCH_FILTER | | | LDAP_SEARCH_FILTER Environment Variable
LDAP_USERNAME_ATTRIBUTE | | | Allows choosing an attribute in the LDAP user record as the username in LogScale
LDAP_VERBOSE_LOGGING | false | | LDAP_VERBOSE_LOGGING Environment Variable
LIVEQUERY_CANCEL_COST_PERCENTAGE | 10 | | Backlog allowed before canceling the queries with the highest cost
LIVEQUERY_CANCEL_TRIGGER_DELAY_MS | 20,000 ms | | Sets the delay before cancelling the most resource-consuming live queries
LOCAL_STORAGE_MIN_AGE_DAYS | | | Minimum number of days to keep a fresh segment file before it is deleted locally
LOCAL_STORAGE_PERCENTAGE | 85 % | | Sets a limit on the percentage of disk full
LOCAL_STORAGE_PREFILL_PERCENTAGE | 70% of LOCAL_STORAGE_PERCENTAGE | | Configures eager prefilling of disks from bucket storage
MAX_BUCKET_POINTS | 10,000 | | MAX_BUCKET_POINTS Environment Variable
MAX_CHARS_TO_FIND_TIMESTAMP | | | Sets the number of characters searched by the findTimestamp() function
MAX_CONCURRENT_EXPORTS_PER_VIEW | 10 | | MAX_CONCURRENT_EXPORTS_PER_VIEW Environment Variable
MAX_DISTINCT_TAG_VALUES | 1,000 | | Allows auto-grouping of tags
MAX_EVENT_FIELD_COUNT | 1,000 fields | | Sets the enforced maximum number of fields in an event in the ingest phase
MAX_EVENT_FIELD_COUNT_IN_PARSER | 50,000 fields | | Specifies the number of fields allowed within the parser
MAX_EVENT_SIZE | 1 MiB | | Specifies the maximum allowed event size
MAX_FILEUPLOAD_SIZE | 104,857,600 bytes | | Specifies the maximum size of uploaded files.
MAX_GRAPHQL_QUERY_DEPTH | 11 | | MAX_GRAPHQL_QUERY_DEPTH Environment Variable
MAX_HOURS_SEGMENT_OPEN | 24 hours | | The maximum number of hours a merge target will remain open for writing before being closed.
MAX_INGEST_DELAY_SECONDS | 3,600 seconds | | Events backlog allowed before LogScale starts responding on http interfaces
MAX_INGEST_REQUEST_SIZE | 33,554,432 bytes | | Size limit of ingest requests after content-encoding has been applied.
MAX_INTERNAL_STATESIZE | 128, HUMIO_MEMORY_OPTS / 1024) MiB | | Sets the size of query states
MAX_JITREX_BACKTRACK | 1,000 | | Limits CPU resources spent in a regex match
MAX_JOIN_LIMIT | 200,000 rows | | Sets the limit parameter of the join() function.
MAX_NUMBER_OF_GLOBALDATA_DUMPS_TO_KEEP | 20 data dumps | | Maximum number of global data dumps
MAX_SECS_WAIT_FOR_SYNC_WHEN_CHANGING_DIGEST_LEADER | 5 minutes | | Specifies when digest coordination will permit a node that is not in sync
MAX_SERIES_LIMIT | 500 series | | Determines the maximum number of series in a bucket and/or timechart.
MAX_SERIES_MEMLIMIT | | | Determines the maximum memory for a series in a bucket and/or timechart.
MAXMIND_ACCOUNT_ID | true | | Sets automatic update of the MaxMind IP location database
MAXMIND_BASE_URL | | | Allows changing the base path to download MaxMind from
MAXMIND_EDITION_ID | | deprecated in 1.19.0 | Deprecated, replaced by MAXMIND_IP_LOCATION_EDITION_ID
MAXMIND_IP_LOCATION_EDITION_ID | | | Allows using an alternative MaxMind database for IP location information (optional)
MAXMIND_LICENSE_KEY | | | Where to specify the license key for your account if you have a MaxMind license
MERGE_TARGET_RETENTION_PERCENTAGE | 3.33 % | | The minimum desired size of merge-result segments based on retention size/time.
MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING | 48 ms | | Logs a warning if a mini segment is not merged
MINISEGMENT_PREMERGE_MAX_BLOCKS | number of blocks in normal mini segments | | Configures how many blocks are allowed in the merge result from merging mini segments into larger mini segments.
MINISEGMENT_PREMERGE_MIN_FILES | 12 minisegments | | Minimum number of mini segments that must go into a merge.
NODE_ROLES | all | | Selects the logical roles for a node within the LogScale cluster
OIDC_USE_HTTP_PROXY | true | | Whether to use the HTTP proxy for calling OIDC
OIDC_AUDIENCE | | | Audience to expect in a JWT
OIDC_AUTHORIZATION_ENDPOINT | | | URL to the endpoint the user is redirected to when authorizing
OIDC_CACHE_USERINFO_MS | 600,000 ms | | How long user info is cached on a LogScale node
OIDC_GROUPS_CLAIM | humio-groups | | Claim name to interpret as the groups in LogScale
OIDC_JWKS_URI | | | URL to the JWKS endpoint for keys to validate tokens
OIDC_OAUTH_CLIENT_ID | | | Client ID of the OpenID application
OIDC_OAUTH_CLIENT_SECRET | | | Client secret of the OpenID application
OIDC_PROVIDER | | | URL to the OpenID Connect provider
OIDC_SCOPES | | | OIDC_SCOPES Environment Variable
OIDC_SERVICE_NAME | OpenID Connect | | OIDC provider name displayed at sign in
OIDC_TOKEN_ENDPOINT | | | URL to the token endpoint used to exchange an authentication code for an access token
OIDC_TOKEN_ENDPOINT_AUTH_METHOD | | | Authorization method for a token endpoint
OIDC_USERINFO_ENDPOINT | | | URL to the user info endpoint to retrieve user information from an access token
OIDC_USERNAME_CLAIM | humio-user | | Name of the claim to interpret as the username in LogScale
ONLY_CREATE_USER_IF_SYNCED_GROUPS_HAVE_ACCESS | false | | Configures whether users are created if synced groups have access to sandbox and sys repositories
POSTMARK_FROM | | | Sends emails using the Postmark service
POSTMARK_SERVER_SECRET | | | Sets the value for your server's token when using the Postmark service
PRIMARY_STORAGE_MAX_FILL_PERCENTAGE | | | Primary segment files' storage limit
PRIMARY_STORAGE_PERCENTAGE | | | Primary segment files' storage limit
PROMETHEUS_METRICS_PORT | | | Enables Prometheus to scrape metrics from LogScale
PUBLIC_URL | | | Public URL where the LogScale instance is reachable from a browser
QUERY_CACHE_MIN_COST | 1,000L | | Enables/disables caching when using features that store a copy of live search results to the local disk
QUERY_COORDINATOR | true | deprecated in 1.119.0 | Sets whether the current node should act as a query coordinator
QUERY_EXECUTOR_CORES | | | Sets the number of CPU cores to reduce pressure on context switching due to hyper-threading
RDNS_DEFAULT_SERVER | | | Default server to use for reverse DNS queries using the rdns() function
READ_GROUP_PERMISSIONS_FROM_FILE | false | | Allows groups and roles to be converted to the new RBAC model and made visible under Administration in read-only mode
S3_ARCHIVING_ACCESSKEY | | | Sets the S3 access key for archiving ingested logs in an export format
S3_ARCHIVING_ENDPOINT_BASE | | | Allows pointing to a non-AWS endpoint for archiving
S3_ARCHIVING_SECRETKEY | | | Sets the S3 secret key for archiving of ingested logs in an export format
S3_ARCHIVING_USE_HTTP_PROXY | true | | Whether to use the globally configured HTTP proxy for communicating with S3
S3_ARCHIVING_WORKERCOUNT | 1 | | Sets the number of parallel workers for upload
S3_EXPORT_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy configured for exporting to Amazon S3
S3_RECOVER_FROM_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a recovery bucket
S3_STORAGE_2_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a second bucket
S3_STORAGE_ACCESSKEY | | | Sets the access key for S3 storage
S3_STORAGE_BUCKET | | | Sets the name of the S3 bucket to use for bucket storage
S3_STORAGE_CONCURRENCY | cores/2 | | The number of concurrent file downloads/uploads in S3 storage
S3_STORAGE_ENCRYPTION_KEY | | | Sets the encryption key for S3 storage
S3_STORAGE_ENDPOINT_BASE | | | Sets the URL for pointing to your own non-AWS endpoint for S3 storage
S3_STORAGE_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a bucket
S3_STORAGE_OBJECT_KEY_PREFIX | | | Sets the optional prefix for all object keys
S3_STORAGE_PREFERRED_COPY_SOURCE | false | | Sets how to download segments from bucket storage when prefetching
S3_STORAGE_REGION | | | S3_STORAGE_REGION Environment Variable
S3_STORAGE_SECRETKEY | | | Sets the secret key for S3 bucket storage
S3_STORAGE_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy for communicating with Amazon Bucket Storage
SAML_ALTERNATIVE_IDP_CERTIFICATE | | | Provides an alternative certificate for authentication
SAML_DEBUG | false | | SAML_DEBUG Environment Variable
SAML_GROUP_MEMBERSHIP_ATTRIBUTE | | | Synchronizes the groups upon successful login in LogScale
SAML_IDP_CERTIFICATE | | | Provides a certificate for authentication
SAML_IDP_ENTITY_ID | | | IDP identifier used internally in the authentication flow
SAML_IDP_SIGN_ON_URL | | | Users accessing LogScale are redirected to this URL and the authentication flow starts
SAML_USER_ATTRIBUTE | | | Allows setting a different user attribute name
SANGRIA_LOG_SLOW_MILLIS | | | SANGRIA_LOG_SLOW_MILLIS Environment Variable
SCHEDULED_SEARCH_BACKFILL_LIMIT | 5 | | Configures the global maximum backfill limit for scheduled searches
SCHEDULED_SEARCH_DESPITE_WARNINGS | false | | Configures whether actions trigger in scheduled searches in case of warnings
SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA | false | | Sets the maximum time a scheduled search will be retried in case of missing data warnings
SECONDARY_DATA_DIRECTORY | | | Enables a secondary file system to store segment files
SECONDARY_STORAGE_MAX_FILL_PERCENTAGE | | | Sets the limit for secondary storage as a percentage
SEND_USER_INVITES | true | | Sets whether to send email invitations
SHARED_DASHBOARDS_ENABLED | true | | Allows disabling shared dashboards
SHUTDOWN_ABORT_FLUSH_TIMEOUT_MILLIS | 30,000 ms | | How long the digest worker thread keeps working on flushing the contents of in-memory buffers at shutdown
SINGLE_USER_PASSWORD | | | Sets the password for single-user authentication mode
SINGLE_USER_USERNAME | user | | Sets the username for single-user authentication mode
SMTP_HOST | | | Allows sending emails using an SMTP server
SMTP_PASSWORD | | | Sets the secret password when using an SMTP server for emails
SMTP_PORT | | | Sets the port number when using an SMTP server for emails
SMTP_SENDER_ADDRESS | | | Sets your sender address when using an SMTP server for emails
SMTP_USE_STARTTLS | | | Enables/disables StartTLS when using an SMTP server for emails
SMTP_USERNAME | | | Sets your username when using an SMTP server for emails
STATIC_IMAGE_CONTENT_URL | | | Allows note widgets to display images from the configured URL
STORAGE_REPLICATION_FACTOR | | | Sets the replication factor for storage
STREAMING_QUERY_KEEPALIVE_NEWLINES | false | | Whether to emit a newline into streaming query responses
STREAMING_QUERY_KEEPALIVE_NEWLINES_ON_NODES | false | | Whether to emit a newline into streaming query responses for internal requests
STREAMING_QUERY_KEEPALIVE_TIMEOUT | unset | | The keep-alive duration to set on HTTP responses for streaming queries
TABLE_CACHE_MAX_STORAGE_FRACTION | 0.001 | introduced in 1.148.0 | Fraction of disk space allowed for caching file data used for query functions such as match() and readFile().
TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY | 0.1 | introduced in 1.148.0 | Fraction of disk space allowed on ingest or httponly nodes for caching file data used for query functions such as match() and readFile().
TAG_HASHING_BUCKETS | 32 | | Used to support auto-grouping of tags
TCP_INGEST_MAX_TIMEOUT_SECONDS | | | Sets the timeout for TCP ingest listeners
THREAD_SIZE_LOGGING_INTERVAL_SECONDS | | | THREAD_SIZE_LOGGING_INTERVAL_SECONDS Environment Variable
TLS_CIPHER_SUITES | | | Used to set the allowed TLS protocols and cipher suites
TLS_CLIENT_ALIAS | | | Alias of the key in the keystore to use when a client request is made from other LogScale instances or to a webhook notifier
TLS_CLIENT_AUTH | false | | Whether to require TLS client authentication
TLS_DEFAULT_ALIAS | | | Alias of the key in the keystore to use when serving a client without an SNI extension header
TLS_HOSTNAME_VERIFICATION_FILTER | | | Whether to perform hostname verification
TLS_KEY_PASSWORD | | | The key password for TLS
TLS_KEYSTORE_TYPE | | | The type of keystore, either PKCS12 or JKS
TLS_PROTOCOLS | | | Sets the TLS protocols to allow when communicating
TLS_SERVER | | | Whether TLS should be used when serving the web interface
TLS_TRUSTSTORE_LOCATION | | | Path to the truststore
TLS_TRUSTSTORE_PASSWORD | | | Password to unlock the truststore, if any
TLS_TRUSTSTORE_TYPE | | | The type of truststore, either PKCS12 or JKS
TOP_K_MAX_MAP_SIZE_HISTORICAL | 32 * 1,024 bytes | | TOP_K_MAX_MAP_SIZE_HISTORICAL Environment Variable
TOP_K_MAX_MAP_SIZE_LIVE | 8 * 1,024 bytes | | TOP_K_MAX_MAP_SIZE_LIVE Environment Variable
TOPIC_MAX_MESSAGE_BYTES | 8,388,608 bytes | | When LogScale is managing Kafka, overrides the default message size.
UI_AUTH_FLOW | true | | UI_AUTH_FLOW Environment Variable
UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK | false | | Permits version discrepancy across clusters in Multi-Cluster Search, provided that the versions are compatible at the protocol level.
USING_EPHEMERAL_DISKS | false | | Whether to use ephemeral disks on all nodes
VALUE_DEDUP_LEVEL | | | Limits the CPU time spent on removing duplication of values
VERBOSE_AUTH | false | | VERBOSE_AUTH Environment Variable
WARN_ON_INGEST_DELAY_MILLIS | 120,000 ms | | Warns when ingest is delayed
ZONE | | | When set, allows partitions to be spread across the different zones
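
To illustrate how several of these parameters combine, the sketch below shows a minimal, hypothetical server.conf for a single node using Kafka, S3 bucket storage, and single-user authentication. All hostnames, bucket names, keys, and values are placeholders, not recommendations, and the exact set of parameters required depends on your deployment.

    # Core node settings (placeholder URLs and paths)
    HUMIO_PORT=8080
    PUBLIC_URL=https://logscale.example.com
    EXTERNAL_URL=http://logscale-node-1.internal:8080
    DIRECTORY=/data/humio-data

    # Kafka connection (comma-separated bootstrap servers)
    KAFKA_SERVERS=kafka-1.internal:9092,kafka-2.internal:9092
    KAFKA_MANAGED_BY_HUMIO=true

    # S3 bucket storage (placeholder credentials and bucket name)
    S3_STORAGE_BUCKET=example-logscale-bucket
    S3_STORAGE_REGION=us-east-1
    S3_STORAGE_ACCESSKEY=EXAMPLE_ACCESS_KEY
    S3_STORAGE_SECRETKEY=EXAMPLE_SECRET_KEY
    S3_STORAGE_ENCRYPTION_KEY=example-encryption-secret

    # Single-user authentication for a test setup
    AUTHENTICATION_METHOD=single-user
    SINGLE_USER_USERNAME=admin
    SINGLE_USER_PASSWORD=change-me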