LogScale Configuration Parameters
Below is an alphabetical list of all of the Configuration Parameters (environment variables) used to configure LogScale on your infrastructure. These include parameters that apply exclusively to the LogScale software, as well as options related to other systems that integrate with LogScale (e.g., Amazon AWS, Google Cloud). Click the name of a variable below for more details on it, along with related and similar options.
To change any of these variables on an installation of the LogScale software, use a plain text editor to open the LogScale configuration file, server.conf, in the /etc/humio directory. Once you've finished making changes, be sure to restart LogScale. Depending on how you deployed it, this means restarting the server itself, or each affected node in a cluster if not all nodes are affected.
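
As an illustration only, a minimal server.conf might set a handful of the parameters listed in the table below; the values shown here (hostnames, paths, credentials) are placeholders, not recommendations:

```ini
# /etc/humio/server.conf -- illustrative values only
PUBLIC_URL=https://logscale.example.com
EXTERNAL_URL=http://logscale-node-1.internal:8080
DIRECTORY=/data/humio-data
KAFKA_SERVERS=kafka1:9092,kafka2:9092,kafka3:9092
AUTHENTICATION_METHOD=single-user
SINGLE_USER_USERNAME=admin
SINGLE_USER_PASSWORD=change-me
```

After saving the file, restart the LogScale service on the node (for example, with systemd and assuming the unit in your installation is named humio: `sudo systemctl restart humio`).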
Table: Configuration Parameters Table
Variable | Default Value | Availability | Description |
---|---|---|---|
ACTION_LINK_BASE_URL | PUBLIC_URL | | Sets the base URL used in links sent from Actions |
AGGREGATE_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | introduced in 1.143.0 | Maximum number of field values stored for each aggregate alert |
AGGREGATE_ALERTS_MAX_CATCH_UP_LIMIT | 24h | introduced in 1.143.0 | Sets how long back aggregate alerts will be able to catch up with delays, expressed using Relative Time Syntax. While an aggregate alert is catching up, it will not react to new events, so if a single event is causing the alert or the associated action(s) to fail, the alert will not trigger until that event is outside the catch up limit. For more information on what aggregate alerts are, see Aggregate Alerts. |
ALERT_DESPITE_WARNINGS | false | | Alerts are activated even with warnings from the alert query |
ALERT_DISCLAIMER | | | Disclaimer to notify that alerts are sent from a given view or repository |
ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | | Maximum number of field values stored for each standard alert |
ALLOW_CHANGE_REPO_ON_EVENTS | false | | HEC allows ingest to any specified repository |
ALLOW_XML_DOCTYPE_DECL | false | | ALLOW_XML_DOCTYPE_DECL Environment Variable |
API_EXPLORER_ENABLED | true | | Enables or disables the API GraphQL Explorer functionality (see Accessing GraphQL using API Explorer). |
AUDITLOG_SENSITIVE_RETENTION_DAYS | 200 * 365 days | | Specifies when sensitive logs are deleted by retention in the humio-audit repository |
AUTH_ALLOW_SIGNUP | true | | AUTH_ALLOW_SIGNUP Environment Variable |
AUTH_BY_PROXY_HEADER_NAME | none | | Specifies the header in which the proxy supplies usernames |
AUTHENTICATION_METHOD | single-user | | Enables a standard LDAP bind method |
AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN | false | | Automatically creates users in LogScale if they logged in with external authentication methods |
AUTO_UPDATE_GROUP_MEMBERSHIPS_ON_SUCCESSFUL_LOGIN | false | | Allows transferring group membership rules at login |
AUTO_UPDATE_IP_LOCATION_DB | true | deprecated in 1.19.0 | Deprecated and replaced by AUTO_UPDATE_MAXMIND |
AUTO_UPDATE_MAXMIND | true | | Enables automatic update of the MaxMind GeoLite2 database |
AUTOSHARDING_CHECKINTERVAL_MS | 30,000 ms | removed in 1.152.0 | Sets the increase interval of the delay triggered for auto-sharding |
AUTOSHARDING_MAX | 1,024 shards | | Sets the number of datasources created during auto-sharding. |
AUTOSHARDING_TRIGGER_DELAY_MS | 14,400,000 ms | removed in 1.152.0 | Sets the delay in ms to trigger auto-sharding in case of high load |
AWS_ACCESS_KEY_ID | | | Sets the access key for AWS |
AWS_SECRET_ACCESS_KEY | | | Sets the secret access key for AWS |
BACKUP_DIR | humio-backup | deprecated in 1.57.0 | Specifies the directory where to write a backup of the data files |
BACKUP_KEY | developer | | Specifies the secret key used for encryption of the data files backup |
BACKUP_NAME | none | deprecated in 1.57.0 | Names the backup of the data files |
BITBUCKET_OAUTH_CLIENT_ID | none | | The Key from your BitBucket OAuth Consumer |
BITBUCKET_OAUTH_CLIENT_SECRET | none | | The Secret from your BitBucket OAuth Consumer |
BOOTSTRAP_HOST_ID | 0 | | Sets an ID for the server at first start up |
BOOTSTRAP_HOST_UUID_COOKIE | none | | Sets a unique identifier of the local filesystem contents |
BOOTSTRAP_ROOT_TOKEN_HASHED | | | Specifies the hashed root token for a LogScale instance |
BUCKET_STORAGE_IGNORE_ETAG_UPLOAD | false | | For bucket storage to work with MinIO, disables checksum matching while uploading the file |
BUCKET_STORAGE_MULTIPLE_ENDPOINTS | false | | Whether the proxy configuration is applied to all bucket storage backends |
BUCKET_STORAGE_SSE_COMPATIBLE | | | Makes bucket storage not verify checksums of raw objects after uploading to an S3 bucket |
CACHE_STORAGE_DIRECTORY | none | | Enables a local cache of segment files |
CACHE_STORAGE_PERCENTAGE | 90 | | Enables caching of files from a slow network file system (EBS) or for a file system on spinning disks |
COMPRESSION_TYPE | high | | Sets default compression levels for segments and minisegments |
COOKIE_DOMAIN | | | Sets the domain when configuring session cookies |
COOKIE_PATH | | | Indicates a URL path that must exist in the requested URL in order to send the cookie header |
COOKIE_SAMESITE | | | Sets whether the cookie should be restricted to a first-party or same-site context |
COOKIE_SECURE | | | Indicates that the cookie is sent to the server only when the request is made with the https: scheme |
CORES | Available Processors | | Specifies the number of CPU cores for the machine running LogScale |
CORS_ALLOWED_ORIGINS | true | | Websites or IP addresses that allow Cross-Origin Resource Sharing |
CREATE_HUMIO_SEARCH_ALL | false | | Allows creation of the humio-search-all view |
DAYS_BEFORE_TOMBSTONE_DELETION | 14 | | Sets the restorability of deleted repositories or views |
DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS | true | | Sets whether or not the existing segment decider will run |
DEFAULT_ALLOW_UPDATE_DESIRED_DIGESTERS | true | | Enables automatic management of the digest partition table |
DEFAULT_DIGEST_REPLICATION_FACTOR | 3 | | Allows configuration of the replication factor used for the digest partition table |
DEFAULT_GROUPS | | | List of default groups that users belong to |
DEFAULT_SEGMENT_REPLICATION_FACTOR | 1 | | Sets the number of replicas each segment file will have. |
DELETE_BACKUP_AFTER_MILLIS | 604,800,000 ms | | Configures when the data files backup must be deleted |
DELETE_ON_INGEST_QUEUE | true | | Deletes events from the ingest queue |
DIGEST_EXECUTOR_CORES | CORES divided by 2 | | Internal configuration to halve the number of CPU cores set in the CORES variable |
DIGEST_REPLICATION_FACTOR | | | Sets the replication factor for digest |
DIRECTORY | humio-data | | Data directory for LogScale |
DISABLE_BUCKET_CLEANING_TMP_FILES | false | | Allows turning off cleaning of files in bucket storage temporary file directories |
DUMP_THREADS_SECONDS | | | Specifies the interval at which thread dumps are written |
ELASTIC_PORT | | | Sets the port for the Elasticsearch bulk endpoint |
EMAIL_ACTION_DISCLAIMER | | | Disclaimer in every email to clarify that alerts or scheduled searches are sent as LogScale actions |
EMERGENCY_USERS | false | | Enables emergency users in case of issues with the identity provider |
ENABLE_AGGREGATE_ALERTS | true | introduced in 1.143.0 | Enables/disables aggregate alerts |
ENABLE_ALERTS | true | | Enables/disables all alerts |
ENABLE_BEARER_TOKEN_AUTHORIZATION | false | | Uses less secure bearer tokens instead of secure cookies |
ENABLE_EVENT_FORWARDING | false | | Enables/disables event forwarding |
ENABLE_FDR_POLLING_ON_NODE | true | | Enables polling and ingest of FDR data on the LogScale node |
ENABLE_FILTER_ALERTS | true | | Enables/disables filter alerts |
ENABLE_INGEST_FEED | true | | Enables/disables ingest feeds on the given node. This may vary between nodes in the cluster such that only some nodes run ingest feeds. |
ENABLE_PERSONAL_API_TOKENS | true | | Enables/disables use of personal API tokens |
ENABLE_QUERY_LOAD_BALANCING | true | | Allows queries to execute locally on the node that receives the requests |
ENABLE_SANDBOXES | true | | Enables/disables sandbox repositories |
ENABLE_SCHEDULED_SEARCHES | false | | Sets whether scheduled searches should be executed |
ENABLEINTERNALLOGGER | true | | ENABLEINTERNALLOGGER Environment Variable |
ENFORCE_AUDITABLE | false | | Sets permissions and enforces Auditable mode for root access |
EXTERNAL_URL | http://localhost:PORT | | URL that other hosts can use to reach this server |
EXTRA_KAFKA_CONFIGS_FILE | | | Allows adding extra Kafka configuration properties |
FDR_USE_PROXY | | | Makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables |
FDR_VISIBILITY_TIMEOUT | 15 m | | Visibility timeout of SQS messages read by the FDR integration |
FILTER_ALERT_MAX_EMAIL_TRIGGER_LIMIT | 15 triggers/minute | | Sets the trigger limit for filter alerts having emails attached |
FILTER_ALERT_MAX_NON_EMAIL_TRIGGER_LIMIT | 100 triggers/minute | | Sets the trigger limit for filter alerts without email attached |
FILTER_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED | 100 | | Maximum number of field values stored for each filter alert |
FILTER_ALERTS_MAX_CATCH_UP_LIMIT | 24h | | Sets how long back filter alerts will be able to catch up with delays, expressed using Relative Time Syntax. Note that while a filter alert is catching up, it will not react to new events, so if a single event is causing the alert or the associated action(s) to fail, the alert will not trigger until that event is outside the catch up limit. For more information, see Filter Alerts. |
FILTER_ALERTS_MAX_WAIT_FOR_MISSING_DATA | 24m | | Sets how long filter alerts will wait for query warnings about missing data to disappear, expressed using Relative Time Syntax. If a filter alert is waiting for query warnings to disappear for longer than 15 minutes, the alert will not react to new events. If the query warning is permanent, the alert will not trigger until the whole waiting time has passed. For more information, see Filter Alerts. |
FLUSH_BLOCK_SECONDS | 900 seconds | | How long a mini-segment can stay open |
FORWARDING_BREAKER_EXP_BACKOFF_FACTOR | 2.0 | | Increases the reset time after each new failure |
FORWARDING_BREAKER_FAILURES | 50 | | Failures before stopping all events in event forwarding |
FORWARDING_BREAKER_MAX_RESET | 60 seconds | | Maximum reset time in event forwarding |
FORWARDING_BREAKER_RESET | 1 second | | Waiting time before a new event in event forwarding |
FORWARDING_BREAKER_TIMEOUT | 10 | | Timeout before a call is considered a failure |
FORWARDING_MAX_CONCURRENCY | 50,000 | | Maximum number of events waiting to be forwarded |
GC_KILL_THRESHOLD_MILLIS | | | Threshold for timeSpentOnGC that makes LogScale exit when exceeded |
GCP_EXPORT_WORKLOAD_IDENTITY | | | Uses Workload Identity when exporting query results to a bucket |
GCP_STORAGE_BUCKET | | | Sets the name of the bucket to use |
GCP_STORAGE_CONCURRENCY | cores/2 | | The number of files concurrently downloaded/uploaded in GCP storage |
GCP_STORAGE_ENCRYPTION_KEY | | | Sets the encryption key of the bucket to use |
GCP_STORAGE_OBJECT_KEY_PREFIX | | | Allows nodes to share a bucket |
GCP_STORAGE_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy for communicating with Google Cloud bucket storage |
GCP_STORAGE_WORKLOAD_IDENTITY | | | Uses Workload Identity for bucket storage |
GITHUB_OAUTH_CLIENT_ID | | | GITHUB_OAUTH_CLIENT_ID Environment Variable |
GITHUB_OAUTH_CLIENT_SECRET | | | GITHUB_OAUTH_CLIENT_SECRET Environment Variable |
GLOB_ALLOW_LIST_EMAIL_ACTIONS | Allow all | | Blocks recipients of email actions that are not in the provided allow list. |
GLOB_MATCH_LIMIT | 20,000 | | Sets the maximum number of rows for csv_file in the match() function |
GLOBAL_THROTTLE_PERCENTAGE | 20 | | Percentage of time allowed for a global publishing thread before other transactions of that type are throttled |
GOOGLE_OAUTH_CLIENT_ID | | | The client_id from your Google OAuth App |
GOOGLE_OAUTH_CLIENT_SECRET | | | The client_secret from your Google OAuth App |
HTTP_PROXY_ALLOW_ACTIONS_NOT_USE | false | | Allows actions not to use HTTP proxy |
HTTP_PROXY_ALLOW_NOTIFIERS_NOT_USE | false | deprecated in 1.19.0 | Configures alert notifiers not to use HTTP proxy |
HTTP_PROXY_HOST | | | Configures the HTTP proxy host used by connections from LogScale |
HTTP_PROXY_PASSWORD | | | Sets the password for HTTP proxy configuration |
HTTP_PROXY_PORT | 3129 | | Sets the port for HTTP proxy configuration |
HTTP_PROXY_USERNAME | | | Sets the username for HTTP proxy configuration |
HUMIO_HTTP_BIND | HUMIO_SOCKET_BIND | | IP address to bind the HTTP listening socket to |
HUMIO_JVM_ARGS | | removed in 1.147.99 | Allows supplementing or tuning the JVM running LogScale |
HUMIO_KAFKA_TOPIC_PREFIX | | | Adds a prefix to the topic names in Kafka |
HUMIO_LOG4J_CONFIGURATION | | | Sets the path for the log4j2-custom-config file |
HUMIO_PORT | | | Sets the TCP port to listen on for HTTP traffic |
HUMIO_SOCKET_BIND | 0.0.0.0 | | Sets the IP address to bind the UDP/TCP/HTTP listening sockets to |
IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES | 4320 minutes | | Time in minutes dashboard queries keep running when not polled |
INGEST_FEED_AWS_ACCESS_KEY_ID | | | Access key ID to use when ingesting from AWS-based ingest feeds. Optional; if not set, the default credentials provider chain is used. |
INGEST_FEED_AWS_CREDENTIALS_PROVIDER_USE_PROXY | true | | Defines whether a proxy will be used when provisioning AWS credentials. |
INGEST_FEED_AWS_DOWNLOAD_MAX_OBJECT_SIZE_DEFAULT | 2,147,483,648 bytes | | Maximum size of objects downloaded from S3. |
INGEST_FEED_AWS_PROCESSING_DOWNLOAD_BUFFER_SIZE_DEFAULT | 8,388,608 bytes | | Size of the buffer used when downloading. |
INGEST_FEED_AWS_PROCESSING_EVENT_BUFFER_SIZE_DEFAULT | 1 MB | | Size of the buffer after preprocessing and splitting into individual events. |
INGEST_FEED_AWS_PROCESSING_EVENTS_PER_BATCH_DEFAULT | 1,000 events | | Number of events ingested per batch. |
INGEST_FEED_AWS_REGION | | | Specifies the region of the AWS bucket. This is not necessary if the value is supplied through the credentials provider chain. |
INGEST_FEED_AWS_ROLE_ARN | | | Specifies the ARN of the S3 bucket on AWS. |
INGEST_FEED_GOVERNOR_GAIN_PER_CORE_DEFAULT | 100,000 | | Change in rate when under/over the setpoint. Increasing this will make the governor more aggressive in changing the ingest rate. |
INGEST_FEED_GOVERNOR_INGEST_DELAY_HIGH_DEFAULT | 10 seconds | | Default ingest delay high setpoint for the ingest feed governor. |
INGEST_FEED_GOVERNOR_INGEST_DELAY_LOW_DEFAULT | 5 seconds | | Default ingest delay low setpoint for the ingest feed governor. |
INGEST_FEED_GOVERNOR_INITIAL_RATE_PER_CORE | 8,000 bytes per second per core | | The initial rate of ingest allowed by the governor in bytes per second per core. |
INGEST_FEED_POLL_PERIOD | 60 seconds | | Specifies the amount of time between successive polls of ingest feeds without pressure. In case of pressure, the ingest feeds will be polled as soon as possible. |
INGEST_FEED_SECRET_ACCESS_KEY | see description | | Secret access key to use when ingesting from AWS-based ingest feeds. Optional; if not set, the default credentials provider chain is used. |
INGEST_QUEUE_REPLICATION_FACTOR | 2 | | Replication factor for the Kafka ingest queue |
INITIAL_DISABLED_NODE_TASK | empty | | Enables/disables node tasks |
INITIAL_FEATURE_FLAGS | empty | | Configures feature flags within LogScale |
IOC_CROWDSTRIKE_API_CLIENT_ID | | | Sets the client ID for the CrowdStrike Intel API |
IOC_CROWDSTRIKE_API_CLIENT_SECRET | | | Sets the client secret for the CrowdStrike Intel API |
IOC_CROWDSTRIKE_API_URL | | | CrowdStrike API server URL for IOCs database download |
IOC_UPDATE_SERVER_URL | https://ioc.humio.com | | API server URL for IOCs database download |
IOC_USE_HTTP_PROXY | true | | Allows choosing whether to use HTTP_PROXY for IOCs database updates |
IP_FILTER_ACTIONS | | | IP-based access control list (ACL) for outgoing connections made by actions. Replaces IP_FILTER_NOTIFIERS |
IP_FILTER_NOTIFIERS | | | IP-based access control list (ACL) for outgoing connections made by notifiers. Replaced by IP_FILTER_ACTIONS |
IP_FILTER_RDNS | | | IP filter for filtering which IP addresses may be queried with the rdns() function. |
IP_FILTER_RDNS_SERVER | | | IP filter for filtering which DNS servers may be specified in the rdns() function. |
JWKS_REFRESH_INTERVAL | 3,600,000 | | JWKS_REFRESH_INTERVAL Environment Variable |
KAFKA_CLIENT_RACK | | | Specifies the client.rack value directly. |
KAFKA_CLIENT_RACK_ENV_VAR | ZONE | | Finds the name of the variable that holds the value of client.rack. |
KAFKA_MANAGED_BY_HUMIO | true | | Sets/unsets whether LogScale creates topics and manages replicas in Kafka |
KAFKA_SERVERS | | | Kafka bootstrap servers list |
LDAP_AUTH_PRINCIPAL | | | Allows transforming LogScale login usernames to enable LDAP authentication |
LDAP_AUTH_PRINCIPALS_REGEX | | | Separates multiple patterns when users are in more locations within LDAP |
LDAP_AUTH_PROVIDER_CERT | | | Specifies the PEM-format value of the certificate required when enabling TLS/SSL-secured communications to the LDAP server |
LDAP_AUTH_PROVIDER_URL | | | The URL to connect to for LDAP authentication |
LDAP_DOMAIN_NAME | | | Allows users to log in with their username rather than the domain name |
LDAP_GROUP_BASE_DN | | | The query to perform to get the user's groups for LDAP |
LDAP_GROUP_FILTER | | | LDAP_GROUP_FILTER Environment Variable |
LDAP_GROUP_SEARCH_BIND_FOR_LOOKUP | false | | LDAP_GROUP_SEARCH_BIND_FOR_LOOKUP Environment Variable |
LDAP_GROUPNAME_ATTRIBUTE | | | Allows using an alternate attribute on the group record in LDAP as the group name in LogScale RBAC configuration |
LDAP_SEARCH_BASE_DN | | | Sets the base DN for the LDAP-Search authentication method |
LDAP_SEARCH_DOMAIN_NAME | | | LDAP_SEARCH_DOMAIN_NAME Environment Variable |
LDAP_SEARCH_FILTER | | | LDAP_SEARCH_FILTER Environment Variable |
LDAP_USERNAME_ATTRIBUTE | | | Allows choosing an attribute in the LDAP user record as the username in LogScale |
LDAP_VERBOSE_LOGGING | false | | LDAP_VERBOSE_LOGGING Environment Variable |
LIVEQUERY_CANCEL_COST_PERCENTAGE | 10 | | Backlog allowed before canceling the queries with the highest cost |
LIVEQUERY_CANCEL_TRIGGER_DELAY_MS | 20,000 ms | | Sets the delay before cancelling the most resource-consuming live queries |
LOCAL_STORAGE_MIN_AGE_DAYS | | | Minimum number of days to keep a fresh segment file before it is deleted locally |
LOCAL_STORAGE_PERCENTAGE | 85 % | | Sets a limit on the percentage of disk usage |
LOCAL_STORAGE_PREFILL_PERCENTAGE | 70% of LOCAL_STORAGE_PERCENTAGE | | Configures eager prefilling of disks from bucket storage |
MAX_BUCKET_POINTS | 10,000 | | MAX_BUCKET_POINTS Environment Variable |
MAX_CHARS_TO_FIND_TIMESTAMP | | | Sets the number of characters searched by the findTimestamp() function |
MAX_CONCURRENT_EXPORTS_PER_VIEW | 10 | | MAX_CONCURRENT_EXPORTS_PER_VIEW Environment Variable |
MAX_DISTINCT_TAG_VALUES | 1,000 | | Allows auto-grouping of tags |
MAX_EVENT_FIELD_COUNT | 1,000 fields | | Sets the enforced maximum number of fields in an event in the ingest phase |
MAX_EVENT_FIELD_COUNT_IN_PARSER | 50,000 fields | | Specifies the number of fields allowed within the parser |
MAX_EVENT_SIZE | 1 MiB | | Specifies the maximum allowed event size |
MAX_FILEUPLOAD_SIZE | 104,857,600 bytes | | Specifies the maximum size of uploaded files. |
MAX_GRAPHQL_QUERY_DEPTH | 11 | | MAX_GRAPHQL_QUERY_DEPTH Environment Variable |
MAX_HOURS_SEGMENT_OPEN | 24 hours | | The maximum number of hours a merge target will remain open for writing before being closed. |
MAX_INGEST_DELAY_SECONDS | 3,600 seconds | | Events backlog allowed before LogScale starts responding on http interfaces |
MAX_INGEST_REQUEST_SIZE | 33,554,432 bytes | | Size limit of ingest requests after content-encoding has been applied. |
MAX_INTERNAL_STATESIZE | 128, HUMIO_MEMORY_OPTS / 1024) MiB | | Sets the size of query states |
MAX_JITREX_BACKTRACK | 1,000 | | Limits CPU resources spent in a regex match |
MAX_JOIN_LIMIT | 200,000 rows | | Sets the limit parameter of the join() function. |
MAX_NUMBER_OF_GLOBALDATA_DUMPS_TO_KEEP | 20 data dumps | | Maximum number of global data dumps |
MAX_SECS_WAIT_FOR_SYNC_WHEN_CHANGING_DIGEST_LEADER | 5 minutes | | Specifies when digest coordination will permit a node that is not in sync |
MAX_SERIES_LIMIT | 500 series | | Determines the maximum number of series in a bucket and/or timechart. |
MAX_SERIES_MEMLIMIT | | | Determines the maximum memory for a series in a bucket and/or timechart. |
MAXMIND_ACCOUNT_ID | true | | Sets automatic update of the MaxMind IP location database |
MAXMIND_BASE_URL | | | Allows changing the base path to download MaxMind from |
MAXMIND_EDITION_ID | | deprecated in 1.19.0 | Deprecated, replaced by MAXMIND_IP_LOCATION_EDITION_ID |
MAXMIND_IP_LOCATION_EDITION_ID | | | Allows using an alternative MaxMind database for IP location information (optional) |
MAXMIND_LICENSE_KEY | | | Where to specify the license key for your account if you have a MaxMind license |
MERGE_TARGET_RETENTION_PERCENTAGE | 3.33 % | | The minimum desired size of merge-result segments, based on retention size/time. |
MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING | 48 ms | | Logs a warning if a mini segment is not merged |
MINISEGMENT_PREMERGE_MAX_BLOCKS | number of blocks in normal mini segments | | Configures how many blocks are allowed in the merge result from merging mini segments into larger mini segments. |
MINISEGMENT_PREMERGE_MIN_FILES | 12 minisegments | | Minimum number of mini segments that must go into a merge. |
NODE_ROLES | all | | Selects the logical roles for a node within the LogScale cluster |
OIDC_USE_HTTP_PROXY | true | | Whether to use the HTTP proxy for calling OIDC |
OIDC_AUDIENCE | | | Audience to expect in a JWT |
OIDC_AUTHORIZATION_ENDPOINT | | | URL of the endpoint the user is redirected to when authorizing |
OIDC_CACHE_USERINFO_MS | 600,000 ms | | How long user info is cached on a LogScale node |
OIDC_GROUPS_CLAIM | humio-groups | | Claim name to interpret as the groups in LogScale |
OIDC_JWKS_URI | | | URL of the JWKS endpoint for keys to validate tokens |
OIDC_OAUTH_CLIENT_ID | | | Client ID of the OpenID application |
OIDC_OAUTH_CLIENT_SECRET | | | Client secret of the OpenID application |
OIDC_PROVIDER | | | URL of the OpenID Connect provider |
OIDC_SCOPES | | | OIDC_SCOPES Environment Variable |
OIDC_SERVICE_NAME | OpenID Connect | | OIDC provider name displayed at sign in |
OIDC_TOKEN_ENDPOINT | | | URL of the token endpoint used to exchange an authentication code for an access token |
OIDC_TOKEN_ENDPOINT_AUTH_METHOD | | | Authorization method for the token endpoint |
OIDC_USERINFO_ENDPOINT | | | URL of the user info endpoint to retrieve user information from an access token |
OIDC_USERNAME_CLAIM | humio-user | | Name of the claim to interpret as the username in LogScale |
ONLY_CREATE_USER_IF_SYNCED_GROUPS_HAVE_ACCESS | false | | Configures whether users are created if synced groups have access to sandbox and sys repositories |
POSTMARK_FROM | | | Send emails using the Postmark service |
POSTMARK_SERVER_SECRET | | | Sets the value of your server's token when using the Postmark service |
PRIMARY_STORAGE_MAX_FILL_PERCENTAGE | | | Primary segment files' storage limit |
PRIMARY_STORAGE_PERCENTAGE | | | Primary segment files' storage limit |
PROMETHEUS_METRICS_PORT | | | Enables Prometheus to scrape metrics from LogScale |
PUBLIC_URL | | | Public URL where the LogScale instance is reachable from a browser |
QUERY_CACHE_MIN_COST | 1,000L | | Enables/disables caching when using features that store a copy of live search results to the local disk |
QUERY_COORDINATOR | true | deprecated in 1.119.0 | Sets whether the current node should act as a query coordinator |
QUERY_EXECUTOR_CORES | | | Sets the number of CPU cores to reduce pressure on context switching due to hyper-threading |
RDNS_DEFAULT_SERVER | | | Default server to use for reverse DNS queries using the rdns() function |
READ_GROUP_PERMISSIONS_FROM_FILE | false | | Allows groups and roles to be converted to the new RBAC model and visible under Administration in read-only mode |
S3_ARCHIVING_ACCESSKEY | | | Sets the S3 access key for archiving ingested logs in an export format |
S3_ARCHIVING_ENDPOINT_BASE | | | Allows pointing to a non-AWS endpoint for archiving |
S3_ARCHIVING_SECRETKEY | | | Sets the S3 secret key for archiving ingested logs in an export format |
S3_ARCHIVING_USE_HTTP_PROXY | true | | Whether to use the globally configured HTTP proxy for communicating with S3 |
S3_ARCHIVING_WORKERCOUNT | 1 | | Sets the number of parallel workers for upload |
S3_EXPORT_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy configured for exporting to Amazon S3 |
S3_RECOVER_FROM_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a recovery bucket |
S3_STORAGE_2_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a second bucket |
S3_STORAGE_ACCESSKEY | | | Sets the access key for S3 storage |
S3_STORAGE_BUCKET | | | Bucket storage S3 variant |
S3_STORAGE_CONCURRENCY | cores/2 | | The number of files concurrently downloaded/uploaded in S3 storage |
S3_STORAGE_ENCRYPTION_KEY | | | Sets the encryption key for S3 storage |
S3_STORAGE_ENDPOINT_BASE | | | Sets the URL for pointing to your own non-AWS endpoint for S3 storage |
S3_STORAGE_KMS_KEY_ARN | | | ARN of the KMS key when using server-side encryption on a bucket |
S3_STORAGE_OBJECT_KEY_PREFIX | | | Sets the optional prefix for all object keys |
S3_STORAGE_PREFERRED_COPY_SOURCE | false | | Sets how to download segments from bucket storage when prefetching |
S3_STORAGE_REGION | | | S3_STORAGE_REGION Environment Variable |
S3_STORAGE_SECRETKEY | | | Sets the secret key for S3 bucket storage |
S3_STORAGE_USE_HTTP_PROXY | true | | Enables/disables the HTTP proxy for communicating with Amazon bucket storage |
SAML_ALTERNATIVE_IDP_CERTIFICATE | | | Provides an alternative certificate for authentication |
SAML_DEBUG | false | | SAML_DEBUG Environment Variable |
SAML_GROUP_MEMBERSHIP_ATTRIBUTE | | | Synchronizes the groups upon successful login in LogScale |
SAML_IDP_CERTIFICATE | | | Provides a certificate for authentication |
SAML_IDP_ENTITY_ID | | | IdP identifier used internally in the authentication flow |
SAML_IDP_SIGN_ON_URL | | | Users accessing LogScale are redirected to this URL and the authentication flow starts |
SAML_USER_ATTRIBUTE | | | Allows setting a different user attribute name |
SANGRIA_LOG_SLOW_MILLIS | | | SANGRIA_LOG_SLOW_MILLIS Environment Variable |
SCHEDULED_SEARCH_BACKFILL_LIMIT | 5 | | Configures the global maximum backfill limit for scheduled searches |
SCHEDULED_SEARCH_DESPITE_WARNINGS | false | | Configures whether actions trigger in scheduled searches in case of warnings |
SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA | false | | Sets the maximum time a scheduled search will be retried in case of missing data warnings |
SECONDARY_DATA_DIRECTORY | | | Enables a secondary file system to store segment files |
SECONDARY_STORAGE_MAX_FILL_PERCENTAGE | | | Sets the limit for secondary storage as a percentage |
SEND_USER_INVITES | true | | Sets whether to send email invitations |
SHARED_DASHBOARDS_ENABLED | true | | Allows disabling shared dashboards |
SHUTDOWN_ABORT_FLUSH_TIMEOUT_MILLIS | 30,000 ms | | How long the digest worker thread keeps working on flushing the contents of in-memory buffers at shutdown |
SINGLE_USER_PASSWORD | | | Sets the password for single-user authentication mode |
SINGLE_USER_USERNAME | user | | Sets the username for single-user authentication mode |
SMTP_HOST | | | Allows sending emails using an SMTP server |
SMTP_PASSWORD | | | Sets the secret password when using an SMTP server for emails |
SMTP_PORT | | | Sets the port number when using an SMTP server for emails |
SMTP_SENDER_ADDRESS | | | Sets your sender address when using an SMTP server for emails |
SMTP_USE_STARTTLS | | | Enables/disables StartTLS when using an SMTP server for emails |
SMTP_USERNAME | | | Sets your username when using an SMTP server for emails |
STATIC_IMAGE_CONTENT_URL | | | Allows note widgets to display images from the configured URL |
STORAGE_REPLICATION_FACTOR | | | Sets the replication factor for storage |
STREAMING_QUERY_KEEPALIVE_NEWLINES | false | | Whether to emit a newline into streaming query responses |
STREAMING_QUERY_KEEPALIVE_NEWLINES_ON_NODES | false | | Whether to emit a newline into streaming query responses for internal requests |
STREAMING_QUERY_KEEPALIVE_TIMEOUT | unset | | The keep-alive duration to set on HTTP responses for streaming queries |
TABLE_CACHE_MAX_STORAGE_FRACTION | 0.001 | introduced in 1.148.0 | Fraction of disk space allowed for caching file data used for query functions such as match() and readFile(). |
TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY | 0.1 | introduced in 1.148.0 | Fraction of disk space allowed on an ingest or httponly node for caching file data used for query functions such as match() and readFile(). |
TAG_HASHING_BUCKETS | 32 | | Used to support auto-grouping of tags |
TCP_INGEST_MAX_TIMEOUT_SECONDS | | | Sets the timeout for TCP ingest listeners |
THREAD_SIZE_LOGGING_INTERVAL_SECONDS | | | THREAD_SIZE_LOGGING_INTERVAL_SECONDS Environment Variable |
TLS_CIPHER_SUITES | | | Used to set the allowed TLS protocols and cipher suites |
TLS_CLIENT_ALIAS | | | Alias of the key in the keystore to use when a client request is made from other LogScale instances or to a webhook notifier |
TLS_CLIENT_AUTH | false | | Whether to require TLS client authentication |
TLS_DEFAULT_ALIAS | | | Alias of the key in the keystore to use when serving a client without an SNI extension header |
TLS_HOSTNAME_VERIFICATION_FILTER | | | Whether to perform hostname verification |
TLS_KEY_PASSWORD | | | The key password for TLS |
TLS_KEYSTORE_TYPE | | | The type of keystore, either PKCS12 or JKS |
TLS_PROTOCOLS | | | Sets the TLS protocols to allow when communicating |
TLS_SERVER | | | Whether TLS should be used when serving the web interface |
TLS_TRUSTSTORE_LOCATION | | | Path to the truststore |
TLS_TRUSTSTORE_PASSWORD | | | Password to unlock the truststore, if any |
TLS_TRUSTSTORE_TYPE | | | The type of truststore, either PKCS12 or JKS |
TOP_K_MAX_MAP_SIZE_HISTORICAL | 32 * 1,024 bytes | | TOP_K_MAX_MAP_SIZE_HISTORICAL Environment Variable |
TOP_K_MAX_MAP_SIZE_LIVE | 8 * 1,024 bytes | | TOP_K_MAX_MAP_SIZE_LIVE Environment Variable |
TOPIC_MAX_MESSAGE_BYTES | 8,388,608 bytes | | When LogScale is managing Kafka, overrides the default message size. |
UI_AUTH_FLOW | true | | UI_AUTH_FLOW Environment Variable |
UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK | false | | Permits version discrepancy across clusters in Multi-Cluster Search, provided that the versions are compatible at the protocol level. |
USING_EPHEMERAL_DISKS | false | | Whether to use ephemeral disks on all nodes |
VALUE_DEDUP_LEVEL | | | Limits the CPU time spent on removing duplication of values |
VERBOSE_AUTH | false | | VERBOSE_AUTH Environment Variable |
WARN_ON_INGEST_DELAY_MILLIS | 120,000 ms | | Warns when ingest is delayed |
ZONE | | | When set, allows spreading partitions across the different zones |
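
For deployments where LogScale does not read server.conf directly (for example, containers), the same parameters are supplied as environment variables. A hedged sketch, assuming a Docker-based node; the image tag, ports, and mounted paths are placeholders for your own deployment:

```shell
# Illustrative only: the env file uses the same VARIABLE=value lines as server.conf
docker run -d \
  --name logscale-node \
  -p 8080:8080 \
  --env-file /etc/humio/server.conf \
  -e PUBLIC_URL=https://logscale.example.com \
  -v /data/humio-data:/data/humio-data \
  humio/humio-core:stable
```

Individual -e flags take precedence over entries from the env file, which can be convenient for per-node values such as EXTERNAL_URL or ZONE.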