Full Release Notes Index

This section collects all release notes on a single page.

Important

If you are using this to look for specific release note entries or changes, please use the Search Release Notes page instead, which provides much richer functionality for finding specific entry types, different change types, and entries across specific versions.

Falcon LogScale 1.169.0 GA (2024-12-17)

Version: 1.169.0
Type: GA
Release Date: 2024-12-17
Availability: Cloud
End of Support: Next LTS
Security Updates: No
Upgrades From: 1.136
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • GraphQL API

    • The new parameter strict has been added to the input of the analyzeQuery() GraphQL query. When set to the default value true, query validation always validates uses of saved queries and query parameters. When set to false, it attempts to skip validation of saved query and query parameter uses. This is a breaking change because validation previously behaved as if strict were set to false. To retain the legacy behavior, set strict=false.

  • Storage

    • There is a change to the archiving logic so that LogScale no longer splits a given segment into multiple bucket objects based on ungrouped tag combinations in the segment. Tag groups were introduced to limit the number of datasources if a given tag had too many different values. But the current implementation of archiving splits the different tag combinations contained in a given segment back out into one bucket per tag combination, which is a scalability issue, and can also affect mini-segment merging. The new approach just uploads into one object per segment. As a visible impact for the user, there will be fewer objects in the archiving bucket, and the naming schema for the objects will change to not include the tags that were grouped into the tag groups that the datasource is based on. The set of events in the bucket will remain the same. This is a cluster risk, so the change is released immediately.

      For self-hosted customers: if you need time to change the external systems that read from the archive due to the naming changes, you may disable the DontSplitSegmentsForArchiving feature flag (see Enabling & Disabling Feature Flags).

      For more information, see Tag Grouping.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Administration and Management

    • Usage is now logged to the humio repository.

  • Ingestion

    • Clicking Run tests on the parser editor page now produces events that are more similar to what an ingested event would look like in certain edge cases.

    • You can now validate whether your parser complies with the CPS schema by clicking the Use CPS checkbox in the parser editor.

      For more information, see Normalize and Validate Against CPS Schema.

Fixed in this release

  • Queries

    • The query table endpoint client has been fixed: it was unable to receive responses for tables larger than 128 MB, resulting in an error.

    • A performance regression in the query scheduler has been fixed; it could lead to query starvation and slow searches.

Improvement

  • Storage

    • Improved performance when syncing IOCs internally within nodes in a cluster.

    • Improved the performance of ingest queue message handling that immediately follows a change in the Kafka partition count. Without this improvement, changing the partition count could substantially slow down processing of events ingested before the repartitioning.

    • Relocation of datasources after a partition count change will now be restarted if the Kafka partition count changes again while the cluster is executing relocations. This ensures that datasource placement always reflects the latest partition count.

  • Functions

    • Improved the error message for missing time zones in the parseTimestamp() function.
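
      As an illustrative sketch (the field name ts is hypothetical), supplying the timezone parameter avoids the missing-time-zone error when the format string itself carries no zone:

      logscale
      parseTimestamp(format="yyyy-MM-dd HH:mm:ss", field=ts, timezone="Europe/Copenhagen")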

Falcon LogScale 1.168.0 GA (2024-12-10)

Version: 1.168.0
Type: GA
Release Date: 2024-12-10
Availability: Cloud
End of Support: Next LTS
Security Updates: No
Upgrades From: 1.136
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Administration and Management

    • Metrics made available on the Prometheus HTTP API have been modified so that the internal metrics that represent "meters" are reported as type=SUMMARY in Prometheus instead of type=COUNTER. As a result, the suffix on the metric name changes from _total to _count. This also adds reporting of 1-, 5- and 15-minute rates.

  • Storage

    • Cluster statistics such as compressed byte size and compressed byte size of the merged subset now count aux files at most once. Previously, these statistics counted every local aux file in the cluster, so the sum grew with the replication factor, and that sum of aux file sizes was added to a sum of segment file sizes that did not consider the replication factor.

      From the user point of view, this change does not affect the ingest accounting and measurements, but it does affect the following other items:

      • The semantics of the compressedByteSize, compressedByteSizeOfMerged and dataVolumeCompressed fields in the ClusterStatsType, RepositoryType and OrganizationStats GraphQL types have changed: file sizes of both segments and aux files are now only counted once.

      • These values are shown, for example, on the front page, and will be smaller than the old values.

      • Retention by compressed file size will keep more segments, since segments are deleted to stay under the actual limit, which is calculated as the configured limit minus the aux file sizes.

      For more information, see Cluster statistics.

  • Configuration

    • Clusters using an HTTP proxy can now choose to have calls to the token endpoint for Google, Bitbucket, Github and Auth0 providers go through this proxy. This is configured by using the following new configuration values:

      The default value for all of these is false, so there is no change to how existing clusters are configured to use Google, Bitbucket, Github or Auth0.

  • Dashboards and Widgets

    • The Table widget cells will now show a warning along with the original value if decimal places are configured to be below 0 or above 20.

Fixed in this release

  • UI Changes

    • The dialog for creating a new group did not close automatically after successfully creating a group. This issue has been fixed.

    • The Saved query dialog has been fixed so that the saved queries are now sorted.

    • The Filter Match Highlighting feature could be deactivated for some regular expression results due to a stack overflow issue in the JavaScript Regular Expression engine. This issue has been fixed and the highlighting now works as expected.

  • API

    • filterQuery in API Query metaData was incorrect when using filters with implicit AND after aggregators. For example, groupBy(x) | y=* z=* would incorrectly give y=* z=* as the filterQuery, whereas * is the correct filterQuery. This issue, present since 1.160.0, has now been fixed. On affected versions, you can work around it by explicitly adding | between filters.
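
      The workaround can be sketched as follows (the query shape is illustrative): make the AND between the trailing filters explicit so the aggregator boundary is unambiguous.

      logscale
      groupBy(x) | y=* | z=*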

  • Dashboards and Widgets

    • In the Time Chart widget, the Step after interpolation method would not display the line or area correctly when used with the Show gaps method for handling missing values.

    • In the Time Chart widget, an issue has been fixed where values below the minimum value of a Logarithmic axis would not be displayed, but values below 0 would.

  • Queries

    • Some queries (especially live queries) would continuously send a warning about missing data. This could happen if the query was planned at a time when there were cluster topology changes. This issue has been fixed and, instead of sending the warning, the query will now automatically restart since there might be more data to search.

    • Queries could sometimes fail and return an IndexOutOfBoundsException error. This issue has been fixed.

  • Functions

    • Fixed an issue where parseCEF() would stop a parser or query upon encountering invalid key-value pairs in the CEF extensions field. For example, in:

      Jun 09 02:26:06 zscaler-nss CEF:0||||||| xx==

      Since the CEF specification dictates that = must be escaped when it appears in a value, the second = triggered the issue, as xx== is not a valid key-value pair.

      If such an error is encountered, the event is left unparsed and a parser error field will be added.

Known Issues

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.

Improvement

  • Storage

    • Improved performance of replicating IOC files to allow faster replication.

Falcon LogScale 1.167.0 GA (2024-12-03)

Version: 1.167.0
Type: GA
Release Date: 2024-12-03
Availability: Cloud
End of Support: Next LTS
Security Updates: No
Upgrades From: 1.136
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Installation and Deployment

    • Added support for communicating between PDF Render Service and LogScale using an HTTP client rather than requiring HTTPS.

  • UI Changes

    • In the Inspection panel, case-insensitive search is now allowed when searching for field names. For example, repo and Repo will now match repo if this field is present.

  • Storage

    • The frequency of Kafka deletions has been reduced from once per minute to once per 10 minutes with the aim of reducing the load on global. As a consequence of this change, Kafka will retain slightly more data.

  • API

    • filterQuery in API Query metaData now searches using the same timestamp field as the original query — the one set in the UI in the Time field selection. For example, it returns useIngestTime=true if the original query used the @ingesttimestamp field.

  • Ingestion

    • The error preview for test cases on the Parsers page now shows if there are additional errors.

  • Functions

    • The wildcard() function has an additional parameter: includeEverythingOnAsterisk. When this parameter is set to true, and pattern is set to *, the function will also match events that are missing the field specified in the field parameter.

      For more information, see wildcard().
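
      As a sketch (the field name class is hypothetical), this matches events where class has any value as well as events that lack the field entirely:

      logscale
      wildcard(field=class, pattern="*", includeEverythingOnAsterisk=true)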

Fixed in this release

  • Storage

    • An issue has been fixed which could in rare cases cause data loss of recently digested events due to improper cache invalidation of the digester state.

  • Dashboards and Widgets

    • The usage of filter for dashboards has been fixed:

      • An active dashboard filter was not being applied to the query before opening a dashboard widget query in the Search view.

      • Dashboard filters are no longer applied when editing a dashboard widget in the Search view.

  • Queries

    • An error in the query execution could lead to a query that would not progress and not stop, and would appear to hang indefinitely. This could happen when hosts were removed from the cluster. This issue has now been fixed.

Known Issues

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.

Falcon LogScale 1.166.0 GA (2024-11-26)

Version: 1.166.0
Type: GA
Release Date: 2024-11-26
Availability: Cloud
End of Support: Next LTS
Security Updates: No
Upgrades From: 1.136
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum supported version that LogScale can be upgraded from has increased from 1.112 to 1.136. This change allows removal of some obsolete data from the LogScale database.

    • The Kafka client has been upgraded to 3.9.0.

New features and improvements

  • Security

    • Users granted the ReadAccess permission on the repository can now read files in read-only mode.

  • Automation and Alerts

    • Updated the wording on a number of error and warning messages shown in the UI for alerts and scheduled searches.

  • Dashboards and Widgets

    • Sections in the Styling panel for all widgets are now collapsible.

  • Functions

    • When the @timestamp field is used in collect(), a warning has been added, because collecting @timestamp will usually not return any results unless there is only one unique timestamp or the limit parameter has been given an argument of 1. A workaround is to rename or create a new field with the value of @timestamp and collect that field instead, for example:

      logscale
      timestamp := @timestamp
      | collect(timestamp)
  • Other

    • Added organization to logs from building parsers.

      When logging organizations, the name is now logged with key organizationName instead of name.

Fixed in this release

  • UI Changes

    • The layout of the Table widget has been fixed: a vertical scroll bar appeared inside the table even when rows took up minimal space, forcing users to scroll within the table to see the last row.

  • Queries

    • The Query stats panel on the Organization Query Monitor was reporting misleading information about the total number of running queries, total number of live queries, and so on, when more than 1,000 queries matched the search term. This has been fixed by changing the global part of the result of the runningQueries GraphQL query, although the list of specific queries used to populate the table on the page is still capped at 1,000.

  • Functions

    • Matching on multiple rows in glob mode missed some matching rows. This happened in cases where there were rows with different glob patterns matching on the same event. For example, using a file example.csv:

      csv
      column1, column2
      ab*, one
      a*, two
      a*, three

      And the query:

      logscale
      match(example.csv, field=column1, mode=glob, nrows=3)

      An event with the field column1=abc would only match on the last two rows. This issue has been fixed so that all three rows will match on the event.

    • objectArray:eval() has been fixed as it did not work on array names containing an array index, for example objectArray:eval(array="myArray[0].foo[]", ...).

    • The defineTable() function in Ad-hoc tables has been fixed: it did not use the ingest timestamp for the time range specification provided by the primary query, using the event timestamp instead. This issue only affected queries where the primary query used ingest timestamps.

    • The defineTable() function in Ad-hoc tables has been fixed as it incorrectly used UTC time zone for query start and end timestamps, regardless of the primary query's time zone. This issue only affected queries where the primary query used a non-UTC time zone, and either of the following:

      • the primary query's time interval used calendar-based presets (like calendar:2d, or now@week), or:

      • the sub-query used any query function that uses the timezone, for example timeChart(), bucket(), and any time:* function.

Known Issues

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128 MB. The user receives an error if the generated table exceeds that size.

Improvement

  • UI Changes

    • The Search Link dashboard interaction now allows you to specify the target view/repository as Current repository. This setting allows for exporting and importing the dashboard in another view, while allowing the Search Link interaction to execute in the same view as the dashboard was imported to. Current repository is now the first suggested option in the Target repository drop-down list in Dashboard Link or Search Link interaction types.

  • Queries

    • In cases where a streaming query is unable to start — for example, if it refers to a file that does not exist — an error message is now returned instead of an empty string.

Falcon LogScale 1.165.1 LTS (2024-12-17)

Version: 1.165.1
Type: LTS
Release Date: 2024-12-17
Availability: Cloud, On-Prem
End of Support: 2025-12-31
Security Updates: Yes
Upgrades From: 1.112
Config. Changes: No

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the following deprecated fields from the Cluster GraphQL type:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

Configuration

  • The AstDepthLimit dynamic configuration and its related GraphQL API have been removed.

  • The UNSAFE_ALLOW_FEDERATED_CIDR, UNSAFE_ALLOW_FEDERATED_MATCH, and ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION environment variables have been removed; LogScale now behaves as if they are always enabled.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The JDK has been upgraded to 23.0.1.

New features and improvements

  • Security

    • Users can now view actions in restricted read-only mode when they have the Data read access permission on the repository or view.

    • Users can now see and use saved queries without needing the CreateSavedQueries and the UpdateSavedQueries permissions.

    • Users can now see actions in restricted read-only mode when they have the ReadAccess permission on the repository or view.

  • UI Changes

    • PDF Render Service now supports proxy communication between service and LogScale. Adding the environment variable http_proxy or https_proxy to the PDF render service environment will add a proxy agent to all requests from the service to LogScale.

    • Documentation is now displayed on hover in the LogScale query editor within Falcon. The full syntax usage and a link to the documentation is now visible for any keyword in a query.

    • The Files page now features a new table view with enhanced search and filtering, making it easier to find and manage your files. You can now import multiple files at once.

      For more information, see Lookup Files.

    • When Saving Queries, saved queries now appear in sorted order and are also searchable.

    • Users with the ReadAccess permission on the repository or view can now view scheduled reports in read-only mode.

    • Files grouped by package are displayed once again on the Files page, including the Package Name column, which was temporarily unavailable after the recent page overhaul.

    • A custom dialog now helps users save their widget changes on the Dashboard page before continuing on the Search page.

  • Automation and Alerts

    • In the activity logs, the exception field now only contains the name of the exception class, as the remainder of what used to be there is already present in the exceptionMessage field.

    • Three alert messages were deprecated and replaced with new, more accurate alert messages.

      • For Legacy Alerts: The query result is currently incomplete. The alert will not be polled in this loop replaces Starting the query for the alert has not finished. The alert will not be polled in this loop.

      • For Filter Alerts and Aggregate Alerts: The query result is currently incomplete. The alert will not be polled in this run replaces Starting the alert query has not finished. The alert will not be polled in this run in some situations where it is more correct.

      • The alert message was updated for filter and aggregate alerts in some cases where the live query was stopped due to the alert being behind.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • The queryStart and queryEnd fields have been added for two aggregate alerts log lines:

      • Alert found results, but no actions were invoked since the alert is throttled

      • Alert found no results and will not trigger

      and removed for three others as they did not contain the correct value:

      • Alert is behind. Will stop live query and start running historic queries to catch up

      • Alert query took too long to start and the result are now too old. LogScale will stop the live query and start running historic queries to catch up

      • Running a historic query to catch up took too long and the result is now outside the retry limit. LogScale will skip this data and start a query for events within the retry limit

    • The Alerts page now shows the following UI changes:

      • A new column Last modified is added in the Alerts overview to display when the alert was last updated and by whom.

      • The same column is also shown in the alert properties side panel and on the Search page.

      • The Package column is no longer displayed by default on the Alerts overview page.

      For more information, see Creating an Alert from the Alerts Overview.

  • GraphQL API

    • The disableFieldAliasSchemaOnViews GraphQL mutation has been added. This mutation allows you to disable a schema on multiple views or repositories at once, instead of running multiple disableFieldAliasSchemaOnView mutations.

      For more information, see disableFieldAliasSchemaOnViews() .

    • New yamlTemplate fields have been created for Dashboard and SavedQuery datatypes. They now replace the deprecated templateYaml fields.

      For more information, see Dashboard , SavedQuery .

    • GraphQL introspection queries now require authentication. Setting the configuration parameter API_EXPLORER_ENABLED to false will still reject all introspection queries.

    • Added the permissionType field to the Group GraphQL type. This field identifies the level of permissions the group has (view, organization or system).

    • Added the following mutations:

      These mutations extend the functionality of the previous versions (without the V2 suffix) by returning additional information about the token such as the id, name, permissions, expiry and IP filters.

  • Storage

    • The WriteNewSegmentFileFormat feature flag has been removed and the feature is now enabled by default, improving compression of segment files.

    • The number of autoshard increase requests allowed has been reduced, to lower the pressure these requests put on global traffic.

  • API

    • Implemented support for returning a result over 1 GB in size on the /api/v1/globalsubset/clustervhost endpoint. The size of the returned result is now limited to 8 GB.

  • Dashboards and Widgets

    • Numbers in the Table widget can now be displayed with trailing zeros to maintain a consistent number of decimal places.

    • When configuring series for a widget, suggestions for series are now available in a dropdown list, rather than having to type the series out.

    • The Bar Chart widget can now be configured in the style panel with a horizontal or vertical orientation.

  • Ingestion

    • Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).

    • Increased a timeout for loading new CSV files used in parsers to reduce the likelihood of having the parser fail.

    • The way query resources are handled with respect to ingest occupancy has changed. If the maximum occupancy over all the ingest readers is less than the limit set (90% by default), LogScale will not reduce resources for queries. The new configuration variable INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT allows you to change this default limit of 90%, to adjust how busy ingest readers should be before query resources are limited.

    • The toolbar of the Parser editor has been modified to be more in-line with the design of the LogScale layout. You can now find Duplicate, Settings and Export buttons under the ellipsis menu.

      For more information, see Parsing Data.

    • Added logging when a parser fails to build and ingest defaults to ingesting without parsing. The log lines start with Failed compiling parser.

  • Queries

    • LogScale Regular Expression Engine V2 is now optimized to support matching any character, e.g. /.*/s.

    • Ad-hoc tables feature is introduced for easier joins. Use the defineTable() function to define temporary lookup tables. Then, join them with the results of the primary query using the match() function. The feature offers several benefits:

      • Intuitive approach that now allows for writing join-like queries in the order of execution

      • Step-by-step workflow to create complex, nested joins easily.

      • Workflow that is consistent with the model used when working with Lookup Files

      • Easy troubleshooting while building queries, using the readFile() function

      • Expanded join use cases, providing support for:

        • inner joins with match(... strict=true)

        • left joins with match(... strict=false)

        • right joins with readFile() | match(... strict=false)

      • Join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)

      When match() or similar functions are used, additional tabs for the files and/or tables used in the primary query now appear in order in Search next to the Results tab. The tab names are prefixed with "Table: " to make it clearer what they refer to.

      For more information, see Using Ad-hoc Tables.
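
      The workflow above can be sketched as follows. This is an illustrative example only: the table name, field names, and filters are hypothetical, and the exact defineTable() parameters should be checked against Using Ad-hoc Tables.

      logscale
      defineTable(name="failed_logins", query={action=login outcome=failure}, include=[username])
      | #type=audit
      | match(table=failed_logins, field=username, strict=false)

      Here strict=false gives left-join behavior, keeping primary-query events even when they have no match in the table.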

    • Changed the internal submit endpoint so that the request logs contain correct information about whether the request is internal.

  • Functions

    • Improvements in the sort(), head(), and tail() functions: the error message when entering an incorrect value in the limit parameter now mentions both the minimum and the maximum configured value for the limit.

    • Introducing the new query function array:rename(). This function renames all consecutive entries of an array starting at index 0.

      For more information, see array:rename().
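
      A minimal sketch, assuming the parameter names array and asArray (verify against the array:rename() documentation):

      logscale
      array:rename(array="mail[]", asArray="email[]")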

    • A new parameter trim has been added to the parseCsv() function to ignore whitespace before and after values. In particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data created by sources that do not adhere to the CSV standard.
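
      As an illustrative sketch (the column names are hypothetical), trim=true tolerates whitespace around values, including quoted values preceded by spaces:

      logscale
      parseCsv(columns=[level, message], trim=true)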

    • The following new functions have been added:

    • bitfield:extractFlags() can now handle unsigned 64 bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.

    • The wildcard() function has an additional parameter: includeEverythingOnAsterisk. When this parameter is set to true, and pattern is set to *, the function will also match events that are missing the field specified in the field parameter.

      For more information, see wildcard().

    • The limits of the following query functions now have their minimum value set to 1. In particular:

    • The new query functions crypto:sha1() and crypto:sha256() have been added. These functions compute a cryptographic SHA hash of the given fields and output a hex string as the result.
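
      A sketch of assumed usage (field names are hypothetical; check the crypto:sha256() documentation for the exact parameters):

      logscale
      crypto:sha256(field=[username, src_ip], as="hash")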

Fixed in this release

  • Security

    • OIDC authentication would fail if certain characters in the state variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.

  • UI Changes

    • The Event List has been fixed, as it would not take sorting from the query API into consideration when sorting events based on the UI configuration.

    • The red border appearing in the Table widget when invalid changes are made to a dashboard interaction would not display correctly. This issue has been fixed.

    • Dragging would stop working on the Dashboard page in cases where invalid changes were made and saved to a widget and the user would then click Continue editing. This issue has been fixed and the dragging now works correctly also in this case.

  • Automation and Alerts

    • Fixed an issue where the Action overview page would not load if it contained a large number of actions.

  • GraphQL API

    • The role.users query has been fixed, as it would return duplicate users in some cases.

  • Storage

    • Mini-segments would not be prioritized correctly when fetching them from bucket storage. This issue has now been fixed.

    • Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and keeping events in Kafka.

    • Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.

    • A NullPointerException occurring since version 1.156.0 when closing segment readers during redactEvent processing has now been fixed.

    • Several issues have been fixed, which could cause LogScale to replay either too much, or too little data from Kafka if segments with topOffsets were deleted at inopportune times. LogScale will now delay deleting newly written segments, even if they violate retention, until the topOffsets field has been cleared, which indicates that the segments cannot be replayed from Kafka later. Segment bytes being held onto in this way are logged by the RetentionJob as part of the periodic logging.

    • An extremely rare data loss issue has been fixed: file corruption on a digester could cause the cluster to delete all copies of the affected segments, even if some copies were not corrupt. When a digester detects a corrupt recently-written segment file during bootup, it will no longer delete that segment from Global. It will instead only remove the local file copy. If the segment needs to be deleted in Global because it's being replayed from Kafka, the new digest leader will handle that as part of taking over the partition.

    • Recently ingested data could be lost when the cluster has bucket storage enabled, USING_EPHEMERAL_DISKS is set to false, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

    • LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.

  • API

    • An issue has been fixed in the computation of the digestFlow property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

      For more information, see Polling a Query Job.

  • Dashboards and Widgets

    • Long values rendered in the Single Value widget would overflow the widget container. This issue has now been fixed.

    • Dashboard parameter values were mistakenly not used by saved queries in scenarios with parameter naming overlap and no saved query arguments provided.

  • Ingestion

    • Some parser assertions would be marked as passing even though they should fail. This issue has now been fixed.

    • An erroneous array gap detection has been fixed; it would detect gaps where there were none.

    • An error is no longer returned when running parser tests without test cases.

    • An issue has been fixed that could cause the starting position for digest to get stuck in rare cases.

  • Queries

    • Backtracking checks are now added to the optimized instructions for (?s).*? in the LogScale Regular Expression Engine V2. This prevents regexes of this type from getting stuck in infinite loops which are ultimately detrimental to a cluster's health.

    • Fixed an issue which could cause live query results from some workers to be temporarily represented in the final result twice. The situation was transient and could only occur during digester changes.

    • Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.

    • Stopping alerts and scheduled searches could create a Could not cancel alert query entry in the activity logs. This issue has now been fixed. The queries were still correctly stopped previously, but this bug led to incorrect logging in the activity log.

    • An issue in the query scheduler that could cause queries to get stuck in rare cases has been fixed.

  • Functions

    • In defineTable(), start and end parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time was relative to now, and it has now been fixed to be relative to the primary query's end time.

    • Error messages produced by the match() function could reference the wrong file. This issue has now been fixed.

  • Other

    • Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.

Known Issues

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.

    • The match() function misses some matching rows when matching on multiple rows in glob mode. This happens in cases where there are rows with different glob patterns matching on the same event. For example, using a file example.csv:

      Raw Events
      column1,column2
      ab*,one
      a*,two
      a*,three

      and the query:

      logscale
      match(example.csv, field=column1, mode=glob, nrows=3)

      An event with the field column1=abc will only match on the last two rows.

    • The match() function misses some matching rows when matching on multiple rows in cidr mode. This happens in cases where there are rows with different subnets matching the same event. For example, using a file example.csv:

      Raw Events
      subnet,value
      1.2.3.4/24,monkey
      1.2.3.4/25,horse

      and the query:

      logscale
      match(example.csv, field=subnet, mode=cidr, nrows=3)

      An input event with ip = 1.2.3.10 will only output:

      ip,value
      1.2.3.10,horse

      whereas the correct output should actually be:

      ip,value
      1.2.3.10,horse
      1.2.3.10,monkey

Improvement

  • UI Changes

    • Improved the information messages displayed in the query editor when errors occur with lookup files used in queries.

    • Improved the warnings given when performing multi-cluster searches across clusters running different LogScale versions.

  • Queries

    • Worker query prioritization is improved in specific cases where a query starts off highly resource-consuming but becomes more efficient as it progresses. In such cases, the scheduler could severely penalize the query, leading to it being unfairly deprioritized.

    • Queries that refer to fields in the event are now more efficient due to an improvement made in the query engine.

Falcon LogScale 1.165.0 GA (2024-11-19)

Version: 1.165.0 (GA)
Release Date: 2024-11-19
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Security

    • Users can now see and use saved queries without needing the CreateSavedQueries and the UpdateSavedQueries permissions.

    • Users can now see actions in restricted read-only mode when they have the ReadAccess permission on the repository or view.

  • UI Changes

    • Users with the ReadAccess permission on the repository or view can now view scheduled reports in read-only mode.

    • Files grouped by package are displayed again on the Files page, including the Package Name column, which was temporarily unavailable after the recent page overhaul.

  • API

    • Implemented support for returning results over 1GB in size on the /api/v1/globalsubset/clustervhost endpoint. The returned result is now limited to 8GB in size.

  • Ingestion

    • Increased the timeout for loading new CSV files used in parsers, to reduce the likelihood of parser failures.

    • Added logging for when a parser fails to build and ingestion falls back to ingesting without parsing. The log lines start with Failed compiling parser.

  • Functions

    • A new parameter trim has been added to the parseCsv() function to ignore whitespace before and after values. In particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data created by sources that do not adhere to the CSV standard.
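
      As an illustrative sketch (the field and column names below are assumptions, not taken from the release note), CSV content with whitespace around quoted values could be parsed with:

      logscale
      parseCsv(result, columns=[count, status, load], trim=true)

      With trim=true, a value such as `  "ok"` (quotes appearing after whitespace) is accepted and the surrounding whitespace is ignored.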

    • The following new functions have been added:

    • bitfield:extractFlags() can now handle unsigned 64-bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.
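
      As a sketch of usage (the field name and flag names are illustrative, and the shape of the flagNames parameter is an assumption), flags could be extracted from a 64-bit integer field with:

      logscale
      bitfield:extractFlags(field=flags, flagNames=[[0, FlagA], [2, FlagC]])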

Fixed in this release

  • Security

    • OIDC authentication would fail if certain characters in the state variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.

  • GraphQL API

    • The role.users query would return duplicate users in some cases. This issue has now been fixed.

  • Storage

    • Recently ingested data could be lost when the cluster has bucket storage enabled, USING_EPHEMERAL_DISKS is set to false, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

    • LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.

  • Functions

    • In defineTable(), start and end parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time was relative to now, and it has now been fixed to be relative to the primary query's end time.

  • Other

    • Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.

Known Issues

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.

Falcon LogScale 1.164.0 GA (2024-11-12)

Version: 1.164.0 (GA)
Release Date: 2024-11-12
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Configuration

  • The AstDepthLimit dynamic configuration and its related GraphQL API have been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • UI Changes

    • The Files page now features a new table view with enhanced search and filtering, making it easier to find and manage your files. You can now import multiple files at once.

      For more information, see Lookup Files.

    • When Saving Queries, saved queries now appear in sorted order and are also searchable.

  • Automation and Alerts

    • In the activity logs, the exception field now only contains the name of the exception class, as the remainder of what used to be there is already present in the exceptionMessage field.

  • GraphQL API

    • The disableFieldAliasSchemaOnViews GraphQL mutation has been added. This mutation allows you to disable a schema on multiple views or repositories at once, instead of running multiple disableFieldAliasSchemaOnView mutations.

      For more information, see disableFieldAliasSchemaOnViews() .

  • Storage

    • The number of autoshard increase requests allowed has been reduced, lessening the pressure these requests put on global traffic.

  • Ingestion

    • The toolbar of the Parser editor has been modified to be more in line with the design of the LogScale layout. You can now find the Duplicate, Settings and Export buttons under the ellipsis menu.

      For more information, see Parsing Data.

Fixed in this release

  • Dashboards and Widgets

    • Dashboard parameter values were mistakenly not used by saved queries in scenarios with parameter naming overlap and no saved query arguments provided.

Falcon LogScale 1.163.0 GA (2024-11-05)

Version: 1.163.0 (GA)
Release Date: 2024-11-05
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the following deprecated fields from the Cluster GraphQL type:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

Configuration

  • The UNSAFE_ALLOW_FEDERATED_CIDR, UNSAFE_ALLOW_FEDERATED_MATCH, and ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION environment variables have been removed; LogScale now behaves as if they are always enabled.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • GraphQL API

    • Added the permissionType field to the Group GraphQL type. This field identifies the level of permissions the group has (view, organization or system).

    • Added the following mutations:

      These mutations extend the functionality of the previous versions (without the V2 suffix) by returning additional information about the token such as the id, name, permissions, expiry and IP filters.

  • Ingestion

    • Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).

  • Queries

    • Ad-hoc tables feature is introduced for easier joins. Use the defineTable() function to define temporary lookup tables. Then, join them with the results of the primary query using the match() function. The feature offers several benefits:

      • An intuitive approach that allows writing join-like queries in the order of execution.

      • Step-by-step workflow to create complex, nested joins easily.

      • A workflow that is consistent with the model used when working with Lookup Files.

      • Easy troubleshooting while building queries, using the readFile() function.

      • Expanded join use cases, providing support for:

        • inner joins with match(... strict=true)

        • left joins with match(... strict=false)

        • right joins with readFile() | match(... strict=false)

      • Join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)

      When match() or similar functions are used, additional tabs from the files and/or tables used in the primary query now appear in order in Search next to the Results tab. The tab names are prefixed by "Table: " to make it clearer what they refer to.

      For more information, see Using Ad-hoc Tables.
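
      As a minimal sketch of this workflow (the repository, field, and table names below are illustrative assumptions), a temporary table can be defined and then joined against the primary query:

      logscale
      defineTable(name="admins", query={role="admin"}, include=[username, department])
      | #type=audit
      | match(table=admins, field=username, strict=false)

      Here strict=false produces a left join, keeping primary-query events even when they have no match in the table.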

    • Changed the internal submit endpoint so that the request logs contain correct information on whether a request is internal.

Fixed in this release

  • Automation and Alerts

    • Fixed an issue where the Action overview page would not load if it contained a large number of actions.

  • Storage

    • Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and keeping events in Kafka.

    • Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.

  • Ingestion

    • An error is no longer returned when running parser tests without test cases.

  • Queries

    • Fixed an issue which could cause live query results from some workers to be temporarily represented in the final result twice. The situation was transient and could only occur during digester changes.

    • Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.

Falcon LogScale 1.162.0 GA (2024-10-29)

Version: 1.162.0 (GA)
Release Date: 2024-10-29
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Security

    • Users can now view actions in restricted read-only mode when they have the Data read access permission on the repository or view.

  • Storage

    • The WriteNewSegmentFileFormat feature flag has been removed and the feature is now enabled by default, improving compression of segment files.

  • Configuration

    • The default value for MINISEGMENT_PREMERGE_MIN_FILES has been increased from 4 to 12. This results in less global traffic from merges, and reduces churn in bucket storage from mini-segments being replaced.

  • Dashboards and Widgets

    • When configuring series for a widget, suggestions for series are now available in a dropdown list, rather than having to type the series out.

  • Ingestion

    • The way query resources are handled with respect to ingest occupancy has changed. If the maximum occupancy over all the ingest readers is less than the configured limit (90% by default), LogScale will not reduce resources for queries. The new configuration variable INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT allows this default limit of 90% to be changed, adjusting how busy ingest readers must be before query resources are limited.
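
      For example, to only begin limiting query resources once ingest readers exceed 95% occupancy (an illustrative value, not a recommendation), the variable could be set in the environment as:

      INGEST_OCCUPANCY_QUERY_PERMIT_LIMIT=95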

Fixed in this release

  • Storage

    • A NullPointerException error occurring since version 1.156.0 when closing segment readers during redactEvent processing has now been fixed.

    • Several issues have been fixed, which could cause LogScale to replay either too much, or too little data from Kafka if segments with topOffsets were deleted at inopportune times. LogScale will now delay deleting newly written segments, even if they violate retention, until the topOffsets field has been cleared, which indicates that the segments cannot be replayed from Kafka later. Segment bytes being held onto in this way are logged by the RetentionJob as part of the periodic logging.

    • An extremely rare data loss issue has been fixed: file corruption on a digester could cause the cluster to delete all copies of the affected segments, even if some copies were not corrupt. When a digester detects a corrupt recently-written segment file during bootup, it will no longer delete that segment from Global. It will instead only remove the local file copy. If the segment needs to be deleted in Global because it's being replayed from Kafka, the new digest leader will handle that as part of taking over the partition.

  • Ingestion

    • An issue has been fixed that could cause the starting position for digest to get stuck in rare cases.

  • Queries

    • Backtracking checks are now added to the optimized instructions for (?s).*? in the LogScale Regular Expression Engine V2. This prevents regexes of this type from getting stuck in infinite loops which are ultimately detrimental to a cluster's health.

    • Stopping alerts and scheduled searches could create a Could not cancel alert query entry in the activity logs. This issue has now been fixed. The queries were still correctly stopped previously, but this bug led to incorrect logging in the activity log.

  • Functions

    • Error messages produced by the match() function could reference the wrong file. This issue has now been fixed.

Improvement

  • Queries

    • Queries that refer to fields in the event are now more efficient due to an improvement made in the query engine.

Falcon LogScale 1.161.0 GA (2024-10-22)

Version: 1.161.0 (GA)
Release Date: 2024-10-22
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The JDK has been upgraded to 23.0.1

New features and improvements

  • UI Changes

    • A custom dialog now helps users save their widget changes on the Dashboard page before continuing on the Search page.

  • Dashboards and Widgets

    • The Bar Chart widget can now be configured in the style panel with a horizontal or vertical orientation.

  • Functions

    • The new query functions crypto:sha1() and crypto:sha256() have been added. These functions compute a cryptographic SHA-hashing of the given fields and output a hex string as the result.
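
      As a sketch of usage (the field name, and the assumption that the functions follow the usual field/as parameter conventions, are illustrative), a SHA-256 digest of a field could be computed with:

      logscale
      crypto:sha256(field=[message], as="hash")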

Fixed in this release

  • Storage

    • Mini-segments would not be prioritized correctly when fetching them from bucket storage. This issue has now been fixed.

  • Dashboards and Widgets

    • Long values rendered in the Single Value widget would overflow the widget container. This issue has now been fixed.

  • Queries

    • An issue in the query scheduler that could cause queries to get stuck in rare cases has been fixed.

Improvement

  • UI Changes

    • Improved the information messages displayed in the query editor when errors occur with lookup files used in queries.

  • Queries

    • Worker query prioritization is improved in specific cases where a query starts off highly resource-consuming but becomes more efficient as it progresses. In such cases, the scheduler could severely penalize the query, leading to it being unfairly deprioritized.

Falcon LogScale 1.160.0 GA (2024-10-15)

Version: 1.160.0 (GA)
Release Date: 2024-10-15
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • UI Changes

    • The PDF Render Service now supports proxy communication between the service and LogScale. Adding the environment variable http_proxy or https_proxy to the PDF render service environment will add a proxy agent to all requests from the service to LogScale.

    • Documentation is now displayed on hover in the LogScale query editor within Falcon. The full syntax usage and a link to the documentation is now visible for any keyword in a query.

  • Automation and Alerts

    • Three alert messages were deprecated and replaced with new, more accurate alert messages.

      • For Legacy Alerts: The query result is currently incomplete. The alert will not be polled in this loop replaces Starting the query for the alert has not finished. The alert will not be polled in this loop.

      • For Filter Alerts and Aggregate Alerts: The query result is currently incomplete. The alert will not be polled in this run replaces Starting the alert query has not finished. The alert will not be polled in this run in some situations where it is more correct.

      • The alert message was updated for filter and aggregate alerts in some cases where the live query was stopped due to the alert being behind.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • The queryStart and queryEnd fields have been added for two aggregate alert log lines:

      • Alert found results, but no actions were invoked since the alert is throttled

      • Alert found no results and will not trigger

      and removed for three others as they did not contain the correct value:

      • Alert is behind. Will stop live query and start running historic queries to catch up

      • Alert query took too long to start and the result are now too old. LogScale will stop the live query and start running historic queries to catch up

      • Running a historic query to catch up took too long and the result is now outside the retry limit. LogScale will skip this data and start a query for events within the retry limit

    • The Alerts page now shows the following UI changes:

      • A new column Last modified is added in the Alerts overview to display when the alert was last updated and by whom.

      • The same column is also added in the alert properties side panel and on the Search page.

      • The Package column is no longer displayed by default on the Alerts overview page.

      For more information, see Creating an Alert from the Alerts Overview.

  • GraphQL API

    • GraphQL introspection queries now require authentication. Setting the configuration parameter API_EXPLORER_ENABLED to false will still reject all introspection queries.

  • Dashboards and Widgets

    • Numbers in the Table widget can now be displayed with trailing zeros to maintain a consistent number of decimal places.

  • Functions

    • Improvements in the sort(), head(), and tail() functions: the error message when entering an incorrect value in the limit parameter now mentions both the minimum and the maximum configured value for the limit.

    • Introducing the new query function array:rename(). This function renames all consecutive entries of an array starting at index 0.

      For more information, see array:rename().
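
      As an illustrative sketch (the parameter names array and asArray are assumptions based on the naming used by other array functions), consecutive entries of mail[0], mail[1], ... could be renamed to user.email[0], user.email[1], ... with:

      logscale
      array:rename(array="mail[]", asArray="user.email[]")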

Fixed in this release

  • UI Changes

    • The Event List would not take sorting from the query API into consideration when sorting events based on the UI configuration. This issue has been fixed.

    • The red border that appears in the Table widget when invalid changes are made to a dashboard interaction would not display correctly. This issue is now fixed.

    • Dragging would stop working on the Dashboard page when invalid changes were made and saved to a widget and the user then clicked Continue editing. This issue has been fixed and dragging now works correctly in this case.

  • Storage

    • A regression introduced with the upgrade to Java 23 in version 1.158.0 has now been fixed. The issue broke SASL support for Kafka, see Kafka documentation for more information.

  • API

    • An issue has been fixed in the computation of the digestFlow property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

      For more information, see Polling a Query Job.

  • Ingestion

    • Some parser assertions would be marked as passing even though they should fail. This issue has now been fixed.

    • An erroneous array gap detection has been fixed; it would detect gaps where there were none.

Improvement

  • UI Changes

    • Improved the warnings given when performing multi-cluster searches across clusters running different LogScale versions.

Falcon LogScale 1.159.1 LTS (2024-10-31)

Version: 1.159.1 (LTS)
Release Date: 2024-10-31
Availability: Cloud
End of Support: 2025-10-31
Security Updates: Yes
Upgrades From: 1.112
Config. Changes: No

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following GraphQL mutations and field have been deprecated, since the starring functionality is no longer in use for alerts and scheduled searches:

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

  • The deprecated JDK-less server.tar.gz tarball release is no longer being published. Users should switch to either server-linux_x64.tar.gz or server-alpine_x64.tar.gz depending on their operating system.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • LogScale now avoids moving mini-segments to follow the digest nodes if the mini-segments are available in Bucket Storage. Instead, mini-segments will now be fetched as needed, when the digest leader is ready to merge them. This reduces the load on Global Database in some cases following a digest reassignment.

    • During digest reassignment, LogScale will now ignore mini-segments in Bucket Storage when deciding whether to switch merge targets because some mini-segments are not present locally. This should slightly reduce the load on Global Database during digest reassignment.

    • Live query updates are now allowed to run on a new thread pool, digestLive, but only for datasources that spend more time on these updates than is allowed in the digester pool, or for many datasources whose total load exceeds the time available for digesters. This frees up time for the digesters, provided there is available CPU on the node.

    • LogScale now avoids moving merge targets to the digest leader during digest reassignment if those segments are already in Bucket Storage.

  • Ingestion

    • Falcon LogScale now improves decision-making around which segments a digest leader fetches as part of taking over leadership. This should reduce the incidence of small bits of data being replayed from Kafka unnecessarily, and may also reduce how often reassignment will trigger a restart of live queries.

      For more information, see Ingestion: Digest Phase.

  • Queries

    • When a digest node is unavailable, a warning is now attached to queries, but the queries are allowed to proceed.

      This way, the behaviour of a query is similar to the case where a segment cannot be searched, due to all the owning nodes being unavailable at the time of the query.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The JDK has been upgraded to 23.0.1

    • Bundled JDK is now upgraded to Java 23.

    • Upgraded the Kafka clients to 3.8.0.

New features and improvements

  • Security

    • New view permissions have been added to allow for updating and deleting different types of assets in a view. For instance, granting a user the UpdateFiles permission in a view will allow the user to update files, but not delete or create files.

      View permissions added:

      These permissions can currently only be assigned using the LogScale GraphQL API and are not supported in the LogScale UI.

      For more information, see Repository & View Permissions.

    • View permissions to allow for creating different types of assets in a view have been added.

      For instance, granting a user the CreateFiles permission in a view will allow the user to create new files, but not edit existing files.

      These permissions can currently only be assigned using the LogScale GraphQL API.

      For more information, see Repository & View Permissions.

    • For multiple configured SAML IdP certificates, Falcon LogScale now enforces that at least one of them is valid and not expired. This prevents login failures that have occurred due to the expiration of one of the certificates.

      For more information, see Certificate Rotation.

    • The purpose of the repository and view permission ChangeTriggers has changed: it is now intended for creating, deleting and updating alerts and scheduled searches. This permission is no longer needed to view alerts and scheduled searches in read-only mode from the Alerts page: instead, the ReadAccess permission is required for that.

    • Creating roles that have an empty set of permissions is now supported in the role-permissions.json file. To allow this, add the following line to the file:

      JAVASCRIPT
      "options": { "allowRolesWithNoPermissions": true }

      This ensures compatibility when migrating from a previous view-group-permissions.json file, should it contain roles without permissions.

      For more information, see Setting up Roles in a File.

  • UI Changes

    • The Time Selector now allows setting advanced relative time ranges that include both a start and an end, as well as time anchoring.

      For more information, see Changing Time Interval, Advanced Time Syntax.

    • The maximum number of fields that can be added in a Field Aliasing schema has been increased from 50 to 1,000.

    • The logging for LogScale Multi-Cluster Search network requests has been improved by adding new endpoints that have the externalQueryId in the path and the federationId in a query parameter.

    • The proxy endpoints for LogScale Multi-Cluster Search have changed. Specific endpoints marked as internal, matching the external endpoints used for proxying, have been added. This improves the ability to track multi-cluster searches in the LogScale requests log.

  • Documentation

    • The naming structure and identification of release types has been updated. LogScale is available in two release types:

      • Generally Available (GA) releases — include new functionality. Using a GA release gets you access to the latest features and functionality.

        GA releases are deployed in LogScale SaaS environments.

      • Long Term Support (LTS) releases — contain the latest features and functionality.

        LogScale on-premise customers are advised to install the LTS releases. LTS releases are provided approximately every six weeks.

        Security fixes are applied to the last three LTS releases.

  • Configuration

    • The new dynamic configuration parameter ParserBacktrackingLimit has been added to govern how many new events can be created from a single input event in parsers.

      This was previously controlled by the QueryBacktrackingLimit configuration parameter, which now applies only to queries, thus allowing for finer control.

    • Kafka resets described at Switching Kafka no longer occur by default. To safeguard against accidental misconfiguration, the ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS environment variable has been added; by default it ensures that Kafka resets are not allowed. While this variable is unset, accidental Kafka resets are prevented until an administrator explicitly consents to a Kafka reset being performed.

      To intentionally perform a Kafka reset, administrators should set ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS to an epoch timestamp in the near future (for instance, now plus one hour), which ensures the setting is automatically disabled again once the reset is complete.

      For more information, see ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS.
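      As a sketch, such a value can be computed like this (assuming, from the _MS suffix, that the variable expects an epoch timestamp in milliseconds):

```python
import time

# Compute a value for ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS that expires one
# hour from now. Assumption: the variable expects epoch milliseconds.
allow_until_ms = int((time.time() + 3600) * 1000)
print(f"ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS={allow_until_ms}")
```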

    • Mini-segments now auto-tune their maximum block count, up to the configured limit. This allows larger mini-segments for fast datasources, which reduces the number of mini-segments in the global change stream.

  • Dashboards and Widgets

    • Improved user experience for creating and configuring dashboard parameters, providing immediate feedback when the setup changes and improved error validation.

      • Saving changes in parameter settings does not require an additional step to apply the changes before saving the dashboard, making it consistent with saving all other dashboard configurations.

      • Changes in the Parameters settings side panel now give immediate feedback on the dashboard.

      • Errors in the parameters setup are now validated on dashboard save, informing users about identified issues.

      • In the Query Parameter type, the Query String field has been replaced with the LogScale Query editor, providing a rich query-writing experience as well as syntax validation.

      • In the File Parameter type, additional validation was added to display a warning if the lookup file used as a source of suggestions was deleted.

      • Parameters now have additional states (error, warning, info) informing users about issues with the setup.

    • Added the ability to move dashboard parameters to a parameter panel from the configuration side panel.

    • Added the ability to drag widgets to Sections when in Editing dashboard mode.

  • Queries

    • Nested repetitions/quantifiers in the Falcon LogScale Regular Expression Engine v2 are now supported. Nested repetitions are constructions that repeat or quantify another regex that itself contains repetition/quantification. For instance, the regex:

      /(?<ipv4>(?:\d{1,3}\.){3}\d{1,3})/

      makes use of nested repetitions, namely:

      (?:\d{1,3}\.){3}

      For more information, see LogScale Regular Expression Engine V2.
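      The same pattern can be exercised with any backtracking regex engine; for illustration, here it is in Python's re module (which writes named groups as (?P<name>...)):

```python
import re

# The IPv4 example from above: a nested repetition, since the quantified
# group (?:\d{1,3}\.){3} itself contains the repetition \d{1,3}.
pattern = re.compile(r"(?P<ipv4>(?:\d{1,3}\.){3}\d{1,3})")
match = pattern.search("client 192.168.0.12 connected")
print(match.group("ipv4"))  # → 192.168.0.12
```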

    • Added support for using the new experimental LogScale Regular Expression Engine v2 by specifying the F flag, for example:

      logscale Syntax
      '/foo/F'

      The new engine is currently under development and while it can be faster in some cases, there may also be cases where it is slower.

      For more information, see LogScale Regular Expression Engine V2.

    • LogScale Regular Expression Engine v2 now improves the optimizer's ability to turn alternations into decision trees.

      For more information, see LogScale Regular Expression Engine V2.

    • Introducing a regex backtracking limit of 0.5 seconds per input for the Falcon LogScale Regex Engine v2. As soon as the regex starts backtracking to find matches, it is timed and cancelled if the backtracking to find a match exceeds 0.5 seconds. This is done to avoid instances of practically infinite backtracking, as can occur with some regexes.

      For more information, see LogScale Regular Expression Engine V2.

    • Added optimizations for start-of-text regex expressions with LogScale Regular Expression Engine v2. In particular:

      /^X/

      and:

      /\AX/

      no longer try to match all positions in the string.

      In tests on large bodies of text, these optimizations have shown improvements of ~202%, for example when tested against a collection of works by Mark Twain.

      For more information, see LogScale Regular Expression Engine V2.
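      The effect of the anchors themselves can be illustrated in any regex dialect; a small sketch using Python's re module:

```python
import re

# ^X and \AX can only ever match at the start of the input (no MULTILINE
# flag), so an optimized engine need not retry the pattern at later offsets.
assert re.search(r"^foo", "foobar") is not None
assert re.search(r"\Afoo", "foobar") is not None
assert re.search(r"^foo", "bar foo") is None  # "foo" is not at the start
```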

    • Under-the-hood changes to how the size of certain events is estimated should now make query state size estimates more realistic.

      • Query warnings are now included in the activity logs for queries.

      • When a query is rejected due to a validation exception, an activity log entry is added.

      • Activity logs for queries are now generated for LogScale Self-Hosted.

  • Functions

    • Introducing the new query function coalesce(). This function accepts a list of fields and returns the first value that is not null or empty. Empty values can also be returned by setting a parameter in the function.

      For more information, see coalesce().
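      A minimal sketch of the described semantics in Python (the event dict and the allow_empty flag name are illustrative assumptions, not LogScale's API):

```python
def coalesce(event, fields, allow_empty=False):
    # Return the value of the first listed field that is present and,
    # unless allow_empty is set, non-empty.
    for field in fields:
        value = event.get(field)
        if value is not None and (allow_empty or value != ""):
            return value
    return None

print(coalesce({"a": "", "b": "x"}, ["a", "b"]))  # → x
```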

    • Introducing the new query function array:drop(). This function drops all consecutive fields of a given array, starting from index 0.

      For more information, see array:drop().
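      A sketch of the described behaviour in Python, using LogScale's flat "name[index]" field convention (the helper itself is illustrative):

```python
def array_drop(event, array):
    # Remove all consecutive entries of the named array, starting at index 0.
    result = dict(event)
    index = 0
    while f"{array}[{index}]" in result:
        del result[f"{array}[{index}]"]
        index += 1
    return result

print(array_drop({"a[0]": "x", "a[1]": "y", "other": "z"}, "a"))
```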

    • The new objectArray:eval() query function is now available for processing structured/nested arrays.

      For more information, see objectArray:eval().

    • The array:eval() query function for processing flat arrays is no longer experimental.

      For more information, see array:eval().

Fixed in this release

  • UI Changes

    • The OIDC and SAML configuration pages under Organization settings have been fixed: a tooltip containing a link would close before users could click the link.

    • Entering new arguments for Multi-value Parameters in Dashboard Link would not actually insert the new argument into the list of arguments. This issue has now been fixed.

    • Suggestions for parameter values in the Interactions panel would not be able to find fields in the query result. This issue has now been fixed.

    • A minor UI issue in dropdown windows has been fixed; for example, the Time interval window popping up from the Time Selector would close if any text inside the window fields was selected and the mouse click was released outside the window.

    • Cleaned up state for multi-cluster searches that could otherwise result in a build-up of memory usage.

  • Automation and Alerts

    • The severity of log message Alert found no results and will not trigger for Aggregate Alerts has been adjusted from Warning to Info.

  • Storage

    • An issue has been fixed where clusters with too few hosts online to reach the configured segment replication factor could run segment rebalancing repeatedly.

      The rebalancing now disables itself in such a situation, until enough nodes come back online that rebalancing will actually be able to reach the replication factor.

    • A NullPointerException occurring since version 1.156.0 when closing segment readers during redactEvent processing has now been fixed.

    • A regression introduced with the upgrade to Java 23 in version 1.158.0 has now been fixed. The issue broke SASL support for Kafka; see the Kafka documentation for more information.

  • API

    • An issue has been fixed in the computation of the digestFlow property of the query response. The information contained there would be stale in cases where the query started from a cached state or there were digest leadership changes (for example, in case of node restarts).

      For more information, see Polling a Query Job.

  • Dashboards and Widgets

    • The tooltip description of a widget would be cut off if the widget took up the whole row. This issue has now been fixed.

    • Newline characters would not be escaped in the dashboard parameter input field, thus appearing as not being part of the value. This issue has now been fixed.

  • Ingestion

    • When creating a new event forwarding rule, the editor was not editable in some cases. This issue has now been fixed.

    • Fixed issues related to searching by ingest timestamp:

      • Issues with the usage of the query state cache when searching by ingest timestamp.

      • Queries whose time interval starts before the UNIX epoch are now rejected. This applies both when searching by ingest timestamp and by event timestamp. Previously, such a query by ingest timestamp would cause an error, while a query by event timestamp was allowed but not useful, since all events in LogScale have event timestamps after the UNIX epoch.

      • When searching by ingest timestamp, the start() and end() functions now report the correct search range.

      • The event timestamp is now used in place of the ingest timestamp if the latter is missing. In old versions of LogScale (prior to 1.15), the ingest timestamp was not stored with events. To support correct filtering by ingest timestamp for such old data, LogScale now treats the event timestamp as the ingest timestamp.
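      The fallback can be sketched like this (field names here are illustrative, not LogScale's internal names):

```python
def effective_ingest_timestamp(event):
    # Events stored before LogScale 1.15 carry no ingest timestamp; fall
    # back to the event timestamp so ingest-time filtering still works.
    ingest = event.get("ingest_timestamp")
    return ingest if ingest is not None else event["event_timestamp"]

print(effective_ingest_timestamp({"event_timestamp": 1000}))  # → 1000
```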

  • Log Collector

    • Fixed a performance issue when sorting by config name in the Fleet Management overview which could result in 503s from the backend.

  • Queries

    • Fixed stale QuerySessions that could cause invalid queries to be re-used.

    • Queries stopped by early stopping criteria were wrongly reported as Cancelled instead of Done. The query status is now reported correctly.

    • Fixed an issue where non-greedy repetition and repetition of fixed width patterns would not adhere to the backtracking limit in the LogScale Regular Expression Engine V2.

    • A regression issue that occurred in LogScale version 1.142.0 has now been fixed: it could cause LogScale to exceed the limit on off-heap memory when running many queries concurrently.

      Queries hitting the limit on off-heap memory could be deprioritized more strongly than intended. This issue has now been fixed.

    • Query polls would not be retried on dashboards if the request timed out. This issue has now been fixed.

    • Building tables for a query would block other tables from being built due to an internal cache implementation behaviour, which has now been fixed.

  • Functions

    • Fixed some cases where writeJson() would output fields as numbers that are not supported by the JSON standard. These fields are now represented by strings in the output to ensure that the resulting JSON is valid.
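      The idea behind the fix can be sketched in Python: NaN and infinities are not representable as JSON numbers, so they are emitted as strings (the helper is illustrative, not LogScale's implementation):

```python
import json
import math

def json_safe(value):
    # Emit non-finite floats as strings so the output remains valid JSON.
    if isinstance(value, float) and not math.isfinite(value):
        return str(value)
    return value

print(json.dumps({"v": json_safe(float("inf"))}))  # → {"v": "inf"}
```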

    • A regression issue has been fixed in the match() function in cidr mode, which made query submission significantly slower.

  • Other

    • Off-heap memory limiting might not apply correctly. This issue has now been fixed.

    • A regression issue where some uploaded files close to 2GB could fail to load has now been fixed.

Early Access

  • Security

    • It is now possible to map one IdP group name to multiple Falcon LogScale groups during group synchronization. Activate the OneToManyGroupSynchronization feature flag for this functionality. With the feature flag enabled, Falcon LogScale will map a group name to all Falcon LogScale groups in the organization that have a matching lookupName or displayName, while also performing validation for identical groups. If the multiple mapping feature is not enabled, the existing one-to-one mapping functionality remains unchanged.

      For more information on how feature flags are enabled, see Enabling & Disabling Feature Flags.

      For more information, see Group Synchronization.

Improvement

  • UI Changes

    • The Amazon S3 archiving UI page now correctly points to the S3 Archiving documentation pages versioned for Self-Hosted and Cloud.

  • Automation and Alerts

    • The error message The alert query did not start within {timeout}. LogScale will retry starting the query. has been fixed to show the actual timeout instead of just {timeout}.

    • In the emails sent by email actions, the text Open in Humio has been replaced by Open in LogScale.

  • Dashboards and Widgets

    • Dashboard parameter suggestions of the FixedList Parameter type now follow the order in which they were configured.

      Dashboard parameter suggestions of the Query Parameter type now follow the order of the query result.

  • Ingestion

    • Data ingest rate monitoring has been adjusted to ensure it reports from nodes across all node roles. Additionally, the number of nodes reporting in large clusters has been raised.

  • Queries

    • Some internal improvements have been made to query coordination to make it more robust in certain cases — in particular with failing queries — with an impact on the timing of some API responses.

    • Some internal improvements have been made to query caching and cache distribution.

    • The enforcement of the limit on off-heap buffers for segments being queried has been tightened: the limit should no longer exceed the size required for reading a single segment, even in cases where the scheduler is very busy.

Falcon LogScale 1.159.0 GA (2024-10-08)

Version: 1.159.0
Type: GA
Release Date: 2024-10-08
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • Ingestion

    • Falcon LogScale now improves decision-making around which segments a digest leader fetches as part of taking over leadership. This should reduce the incidence of small bits of data being replayed from Kafka unnecessarily, and may also reduce how often reassignment will trigger a restart of live queries.

      For more information, see Ingestion: Digest Phase.

New features and improvements

  • Security

    • For multiple configured SAML IdP certificates, Falcon LogScale now enforces that at least one of them is valid and not expired. This prevents login failures that have occurred due to the expiration of one of the certificates.

      For more information, see Certificate Rotation.

    • The purpose of the repository and view permission ChangeTriggers has changed: it is now intended for creating, deleting, and updating alerts and scheduled searches. This permission is no longer needed to view alerts and scheduled searches in read-only mode from the Alerts page: instead, the ReadAccess permission is required for that.

    • Creating roles that have an empty set of permissions is now supported in the role-permissions.json file. To allow this, add the following to the file:

      JSON
      "options": {
        "allowRolesWithNoPermissions": true
      }

      This ensures compatibility when migrating from a previous view-group-permissions.json file, should it contain roles without permissions.

      For more information, see Setting up Roles in a File.

  • Configuration

    • Kafka resets described at Switching Kafka no longer occur by default. To safeguard against accidental misconfiguration, the ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS environment variable has been added; by default it ensures that Kafka resets are not allowed. While this variable is unset, accidental Kafka resets are prevented until an administrator explicitly consents to a Kafka reset being performed.

      To intentionally perform a Kafka reset, administrators should set ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS to an epoch timestamp in the near future (for instance, now plus one hour), which ensures the setting is automatically disabled again once the reset is complete.

      For more information, see ALLOW_KAFKA_RESET_UNTIL_TIMESTAMP_MS.

  • Queries

    • Nested repetitions/quantifiers in the Falcon LogScale Regular Expression Engine v2 are now supported. Nested repetitions are constructions that repeat or quantify another regex that itself contains repetition/quantification. For instance, the regex:

      /(?<ipv4>(?:\d{1,3}\.){3}\d{1,3})/

      makes use of nested repetitions, namely:

      (?:\d{1,3}\.){3}

      For more information, see LogScale Regular Expression Engine V2.

    • Introducing a regex backtracking limit of 0.5 seconds per input for the Falcon LogScale Regex Engine v2. As soon as the regex starts backtracking to find matches, it is timed and cancelled if the backtracking to find a match exceeds 0.5 seconds. This is done to avoid instances of practically infinite backtracking, as can occur with some regexes.

      For more information, see LogScale Regular Expression Engine V2.

    • Under-the-hood changes to how the size of certain events is estimated should now make query state size estimates more realistic.

  • Functions

    • Introducing the new query function coalesce(). This function accepts a list of fields and returns the first value that is not null or empty. Empty values can also be returned by setting a parameter in the function.

      For more information, see coalesce().

    • Introducing the new query function array:drop(). This function drops all consecutive fields of a given array, starting from index 0.

      For more information, see array:drop().

Fixed in this release

  • Queries

    • Building tables for a query would block other tables from being built due to an internal cache implementation behaviour, which has now been fixed.

Early Access

  • Security

    • It is now possible to map one IdP group name to multiple Falcon LogScale groups during group synchronization. Activate the OneToManyGroupSynchronization feature flag for this functionality. With the feature flag enabled, Falcon LogScale will map a group name to all Falcon LogScale groups in the organization that have a matching lookupName or displayName, while also performing validation for identical groups. If the multiple mapping feature is not enabled, the existing one-to-one mapping functionality remains unchanged.

      For more information on how feature flags are enabled, see Enabling & Disabling Feature Flags.

      For more information, see Group Synchronization.

Falcon LogScale 1.158.0 GA (2024-10-01)

Version: 1.158.0
Type: GA
Release Date: 2024-10-01
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Behavior Changes

Scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • Queries

    • When a digest node is unavailable, a warning is now attached to queries, but the queries are allowed to proceed.

      This way, the behaviour of a query is similar to the case where a segment cannot be searched due to all the owning nodes being unavailable at the time of the query.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Bundled JDK is now upgraded to Java 23.

New features and improvements

  • UI Changes

    • The logging for LogScale Multi-Cluster Search network requests has been improved by adding new endpoints that have the externalQueryId in the path and the federationId in a query parameter.

    • The proxy endpoints for LogScale Multi-Cluster Search have changed. Specific endpoints marked as internal, matching the external endpoints used for proxying, have been added. This improves the ability to track multi-cluster searches in the LogScale requests log.

  • Documentation

    • The naming structure and identification of release types has been updated. LogScale is available in two release types:

      • Generally Available (GA) releases — include new functionality. Using a GA release gets you access to the latest features and functionality.

        GA releases are deployed in LogScale SaaS environments.

      • Long Term Support (LTS) releases — contain the latest features and functionality.

        LogScale on-premise customers are advised to install the LTS releases. LTS releases are provided approximately every six weeks.

        Security fixes are applied to the last three LTS releases.

  • Configuration

    • The new dynamic configuration parameter ParserBacktrackingLimit has been added to govern how many new events can be created from a single input event in parsers.

      This was previously controlled by the QueryBacktrackingLimit configuration parameter, which now applies only to queries, thus allowing for finer control.

  • Queries

    • LogScale Regular Expression Engine v2 now improves the optimizer's ability to turn alternations into decision trees.

      For more information, see LogScale Regular Expression Engine V2.

    • Added optimizations for start-of-text regex expressions with LogScale Regular Expression Engine v2. In particular:

      /^X/

      and:

      /\AX/

      no longer try to match all positions in the string.

      In tests on large bodies of text, these optimizations have shown improvements of ~202%, for example when tested against a collection of works by Mark Twain.

      For more information, see LogScale Regular Expression Engine V2.

Fixed in this release

  • UI Changes

    • A minor UI issue in dropdown windows has been fixed; for example, the Time interval window popping up from the Time Selector would close if any text inside the window fields was selected and the mouse click was released outside the window.

  • Dashboards and Widgets

    • The tooltip description of a widget would be cut off if the widget took up the whole row. This issue has now been fixed.

  • Ingestion

    • When creating a new event forwarding rule, the editor was not editable in some cases. This issue has now been fixed.

  • Functions

    • A regression issue has been fixed in the match() function in cidr mode, which made query submission significantly slower.

Improvement

  • Automation and Alerts

    • The error message The alert query did not start within {timeout}. LogScale will retry starting the query. has been fixed to show the actual timeout instead of just {timeout}.

    • In the emails sent by email actions, the text Open in Humio has been replaced by Open in LogScale.

  • Dashboards and Widgets

    • Dashboard parameter suggestions of the FixedList Parameter type now follow the order in which they were configured.

      Dashboard parameter suggestions of the Query Parameter type now follow the order of the query result.

Falcon LogScale 1.157.0 GA (2024-09-24)

Version: 1.157.0
Type: GA
Release Date: 2024-09-24
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

  • The deprecated JDK-less server.tar.gz tarball release is no longer being published. Users should switch to either server-linux_x64.tar.gz or server-alpine_x64.tar.gz depending on their operating system.

Behavior Changes

Scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • Storage

    • LogScale now avoids moving mini-segments to follow the digest nodes if the mini-segments are available in Bucket Storage. Instead, mini-segments will now be fetched as needed, when the digest leader is ready to merge them. This reduces the load on Global Database in some cases following a digest reassignment.

    • During digest reassignment, LogScale will now ignore mini-segments in Bucket Storage when deciding whether to switch merge targets because some mini-segments are not present locally. This should slightly reduce the load on Global Database during digest reassignment.

    • Live query updates are now allowed to run on a new thread pool, digestLive, but only for datasources that spend more time on these updates than the digester pool allows for live queries, or, in the case of many datasources, if their total load exceeds the time available for digesters. This frees up time for the digesters, provided there is available CPU on the node.

    • LogScale now avoids moving merge targets to the digest leader during digest reassignment if those segments are already in Bucket Storage.

New features and improvements

  • GraphQL API

    • Field aliases now have API support for being exported and imported as YAML.

Fixed in this release

  • Dashboards and Widgets

    • Newline characters would not be escaped in the dashboard parameter input field, thus appearing as not being part of the value. This issue has now been fixed.

  • Queries

    • Queries stopped by early stopping criteria were wrongly reported as Cancelled instead of Done. The query status is now reported correctly.

  • Other

    • Off-heap memory limiting might not apply correctly. This issue has now been fixed.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in performance for the query and block other queries from executing.

Improvement

  • Ingestion

    • Data ingest rate monitoring has been adjusted to ensure it reports from nodes across all node roles. Additionally, the number of nodes reporting in large clusters has been raised.

  • Queries

    • Some internal improvements have been made to query coordination to make it more robust in certain cases — in particular with failing queries — with an impact on the timing of some API responses.

Falcon LogScale 1.156.0 GA (2024-09-17)

Version: 1.156.0
Type: GA
Release Date: 2024-09-17
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Upgraded the Kafka clients to 3.8.0.

Fixed in this release

  • UI Changes

    • The OIDC and SAML configuration pages under Organization settings have been fixed: a tooltip containing a link would close before users could click the link.

    • Entering new arguments for Multi-value Parameters in Dashboard Link would not actually insert the new argument into the list of arguments. This issue has now been fixed.

    • Suggestions for parameter values in the Interactions panel would not be able to find fields in the query result. This issue has now been fixed.

  • Storage

    • An issue has been fixed where clusters with too few hosts online to reach the configured segment replication factor could run segment rebalancing repeatedly.

      The rebalancing now disables itself in such a situation, until enough nodes come back online that rebalancing will actually be able to reach the replication factor.

  • Queries

    • A regression issue that occurred in LogScale version 1.142.0 has now been fixed: it could cause LogScale to exceed the limit on off-heap memory when running many queries concurrently.

      Queries hitting the limit on off-heap memory could be deprioritized more strongly than intended. This issue has now been fixed.

  • Other

    • A regression issue where some uploaded files close to 2GB could fail to load has now been fixed.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in performance for the query and block other queries from executing.

Improvement

  • UI Changes

    • The Amazon S3 archiving UI page now correctly points to the S3 Archiving documentation pages versioned for Self-Hosted and Cloud.

  • Queries

    • The enforcement of the limit on off-heap buffers for segments being queried has been tightened: the limit should no longer exceed the size required for reading a single segment, even in cases where the scheduler is very busy.

Falcon LogScale 1.155.0 GA (2024-09-10)

Version: 1.155.0
Type: GA
Release Date: 2024-09-10
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always produce the result 0 (since there was no array field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. The function therefore now requires the given array name to include [ ] brackets, since it only works on array fields.
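      The new validation can be sketched as follows (a Python stand-in for the check, not LogScale's implementation):

```python
def require_array_name(argument):
    # array:length() now rejects an array argument without the [] suffix
    # instead of silently treating it as an empty array.
    if not argument.endswith("[]"):
        raise ValueError(f"{argument!r} is not an array field name")
    return argument

print(require_array_name("field[]"))  # → field[]
```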

New features and improvements

  • Security

    • View permissions to allow for creating different types of assets in a view have been added.

      For instance, granting a user the CreateFiles permission in a view will allow the user to create new files, but not edit existing files.

      These permissions can currently only be assigned using the LogScale GraphQL API.

      For more information, see Repository & View Permissions.

  • UI Changes

    • The maximum number of fields that can be added in a Field Aliasing schema has been increased from 50 to 1,000.

  • GraphQL API

    • A new GraphQL API, onDefaultBucketConfigs, has been added for getting non-default bucket storage configurations for organizations. The intended use is to help with managing a fleet of LogScale clusters.

  • Functions

Fixed in this release

  • UI Changes

  • Automation and Alerts

    • The severity of the log message Alert found no results and will not trigger for Aggregate Alerts has been adjusted from Warning to Info.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Improvement

  • Queries

    • Some internal improvements have been made to query caching and cache distribution.

Falcon LogScale 1.154.0 GA (2024-09-03)

Version: 1.154.0
Type: GA
Release Date: 2024-09-03
Availability: Cloud
End of Support: 2025-10-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. The given array name must therefore include the [ ] brackets, since the function only works on array fields.

New features and improvements

  • UI Changes

  • GraphQL API

    • Introducing the view field on GraphQL FileEntry type, accessible through the entitiesSearch field.

  • Configuration

    • Mini-segments auto-tune their max block count, up to their limit from configuration. This allows bigger minis for fast datasources, which reduces the number of minis in the global change stream.

  • Dashboards and Widgets

    • Improved user experience for creating and configuring dashboard parameters, providing immediate feedback when the setup changes and improved error validation.

      • Saving changes in parameters settings does not require an additional step to apply the changes before saving the dashboard, making it consistent with saving all other dashboard configurations.

      • Changes in the Parameters settings side panel now give immediate feedback on the dashboard.

      • Errors in the parameters setup are now validated on dashboard save, informing users about identified issues.

      • In the Query Parameter type, the Query String field has been replaced with the LogScale Query editor, providing a rich query writing experience as well as syntax validation.

      • In the File Parameter type, additional validation was added to display a warning if the lookup file used as a source of suggestions was deleted.

      • Parameters now have additional states (error, warning, info) informing users about issues with the setup.

    • Added the ability to move dashboard parameters to a parameter panel from the configuration side panel.

  • Queries

    • Added support for using the new experimental LogScale Regular Expression Engine v2 by specifying the F flag, for example:

      logscale Syntax
      /foo/F

      The new engine is currently under development and while it can be faster in some cases, there may also be cases where it is slower.

      For more information, see LogScale Regular Expression Engine V2.

      • Query warnings are now included in the activity logs for queries

      • When a query is rejected due to a validation exception, an activity log is added

      • Activity logs for queries are now generated for LogScale Self-Hosted

Fixed in this release

  • Ingestion

    • Fixed issues related to searching for ingest timestamp:

      • Issues with the usage of the query state cache when searching by ingest timestamp.

      • Queries whose time interval starts before the UNIX epoch are now rejected. This applies both when searching by ingest timestamp and by event timestamp. Previously, such a query by ingest timestamp would cause an error, while a query by event timestamp was allowed but not useful, as all events in LogScale have event timestamps after the UNIX epoch.

      • When searching by ingest timestamp, start() and end() functions now report the correct search range.

      • Use the event timestamp in place of the ingest timestamp if the latter is missing. In old versions of LogScale (prior to 1.15), the ingest timestamp was not stored with events. To support correct filtering when searching by ingest timestamp on such old data, LogScale now treats the event timestamp as the ingest timestamp as well.

  • Log Collector

    • Fixed a performance issue when sorting by config name in the Fleet Management overview which could result in 503s from the backend.

  • Queries

    • Fixed stale QuerySessions that could cause invalid queries to be re-used.

    • Query poll would not be re-tried on dashboards if the request timed out.

  • Functions

    • Fixed some cases where writeJson() would output fields as numbers that are not supported by the JSON standard. These fields are now represented by strings in the output to ensure that the resulting JSON is valid.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Falcon LogScale 1.153.4 LTS (2024-12-17)

Version: 1.153.4
Type: LTS
Release Date: 2024-12-17
Availability: Cloud
End of Support: 2025-09-30
Security Updates: Yes
Upgrades From: 1.112
Config. Changes: No


These notes include entries from the following previous releases: 1.153.1, 1.153.3

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file. This now aligns with the behavior of calling the same function with a single column.

      For more information, see match().

Removed

Items that have been removed as of this release.

Installation and Deployment

  • The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

  • The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.

API

  • The following previously deprecated KAFKA API endpoints have been removed:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id

Configuration

Other

  • Unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, the logging in the IngestPartitionCoordinator class has been improved, to allow monitoring of when reassignment of desired and current digesters happens — by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • The default cleanup.policy for the transientChatter-events topic has been switched from compact to delete,compact. This change will not apply to existing clusters. Changing this setting to delete,compact via Kafka's command line tools is particularly recommended if transientChatter is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Storage

    • Reduced the waiting time for redactEvents background jobs to complete.

      The background job does not complete until all mini-segments affected by the redaction have been merged into full segments. The job previously waited pessimistically for MAX_HOURS_SEGMENT_OPEN (30 days) before attempting the rewrite. It now waits for FLUSH_BLOCK_SECONDS (15 minutes) before attempting the rewrite; while some mini-segments may still not be rewritten for up to 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

      For more information, see Redact Events API.

  • Configuration

    • Previously, when a global publish to Kafka timed out from digester threads, the system would initiate a failure shutdown. As of version 1.144, the system instead retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.

    • Autoshards no longer respond to ingest delay by default, and now support round-robin instead.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. The given array name must therefore include the [ ] brackets, since the function only works on array fields.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.

      It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138, see Falcon LogScale 1.138.0 GA (2024-05-14)

    • The JDK has been upgraded to 23.0.1

New features and improvements

  • Security

    • When extending Retention span or size, any segments that were marked for deletion — but where the files remain in the system — are automatically resurrected. How much data you reclaim via this depends on the backupAfterMillis configuration on the repository.

      For more information, see Audit Logging.

  • Installation and Deployment

    • The Docker containers have been configured to use the following environment variable values internally:

      • DIRECTORY=/data/humio-data

      • HUMIO_AUDITLOG_DIR=/data/logs

      • HUMIO_DEBUGLOG_DIR=/data/logs

      • JVM_LOG_DIR=/data/logs

      • JVM_TMP_DIR=/data/humio-data/jvm-tmp

      This configuration replaces the following chains of internal symlinks, which have been removed:

      • /app/humio/humio/humio-data to /app/humio/humio-data

      • /app/humio/humio-data to /data/humio-data

      • /app/humio/humio/logs to /app/humio/logs

      • /app/humio/logs to /data/logs

      This change is intended to allow the tool scripts in /app/humio/humio/bin to work correctly, as they were previously failing due to the presence of dangling symlinks when invoked via docker run if nothing was mounted at /data.

  • UI Changes

    • LogScale administrators can now set the default timezone for their users.

      For more information, see Setting Time Zone.

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

    • The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.

      For more information, see Changing Time Interval.

    • A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.

      For more information, see Alert Properties.

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

      For more information, see Creating a File.

    • Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.

      For more information, see Sections.

    • The Users page has been redesigned so that repository and view roles are displayed in a right-hand side panel, which opens when a repository or view is selected. The panel shows the roles that grant the user permissions on the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.

      For more information, see Manage Users.

    • An organization administrator can now update a user's role on a repository or view from the Users page.

      For more information, see Manage User Roles.

    • The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.

    • The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.

      For more information, see Query Monitor — Query Details.

    • In Organization settings, layout changes have been made to the Groups page for viewing and updating repository and view permissions on a group.

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, the throttle time must be lowered to at most 1 week.

    • Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.

      For more information, see Alerts.

    • The {action_invocation_id} message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

      For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.

    • It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.

      For more information, see Field-Based Throttling.

    • Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.

    • The following UI changes have been introduced for alerts:

      • The Alerts overview page now presents a table with search and filtering options.

      • An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.

      • The alert's properties are opened in a side panel when creating or editing an alert.

      • In the side panel, the recommended alert type to choose is suggested based on the query.

      • For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).

      For more information, see Creating Alerts, Alert Properties.

    • A new Disabled actions status has been added and is visible in the Alerts overview table. This status is displayed when an alert (or scheduled search) has only disabled actions attached.

      For more information, see Alerts Overview.

    • Audit logs for Filter Alerts now contain the language version of the alert query.

    • A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.

      For more information, see Aggregate Alerts.

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, then the Scheduled reports menu item under Automation will not show.

      • If the feature is disabled or the render service is in an error state, users who are granted the ChangeScheduledReport permission and try to access the page will be presented with a banner on the Scheduled reports overview page.

      • The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster, in order for the ChangeScheduledReport permission to have any effect.

    • Users can now see warnings and errors associated with alerts on the Alerts page opened in read-only mode.

  • GraphQL API

    • The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list [[]]. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.

    • The log line containing Executed GraphQL query in the humio repository, that is logged for every GraphQL call, now contains the name of the mutations and queries that are executed.

    • The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.

    • The preview tag has been removed from the following GraphQL mutations:

    • The stopStreamingQueries() GraphQL mutation is no longer in preview.

    • The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches. This happens when the filterString input argument is provided, and makes it possible to search for rows without knowing the full column values, while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

    • The new startFromDateTime argument has been added to s3ConfigureArchiving GraphQL mutation. When set, S3Archiving does not consider segment files that have a start time that is before this point in time. This in particular allows enabling S3 archiving only from a point in time and going forward, without archiving all the older files too.

    • A new field named searchUsers has been added on the group() output type in GraphQL, which is used to search users in the group. The field also allows for pagination, ordering and sorting of the result set.

  • Storage

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently, by setting the Content-MD5 header during upload thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.

    • Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:

      • Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.

      • Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.

      • Downloading the file that was uploaded, in order to validate the checksum file. This mode is enabled if neither of the other modes are enabled.

      Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.

    • The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.

      For more information, see Bucket Storage.

    • For better efficiency, more than one object is now deleted from Bucket Storage per request to S3 in order to reduce the number of requests to S3.

    • Support has been implemented for returning a result over 1 GB in size on the queryjobs endpoint; the returned result is now limited to 8 GB. The limits on state sizes for queries remain unaltered, so the effect of this change is that some queries that previously completed but failed to return their results due to reaching 1 GB now work.

  • API

  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

    • The QueryBacktrackingLimit feature is now enabled by default. The default value for the max number of backtracks (number of times a single event can be processed) a query can do has been reduced to 2,000.

    • Adjusted launcher script handling of the CORES environment variable:

      If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

      -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.

    • The default retention.bytes for the global topic has been changed from 1 GB to 20 GB. This is applied only when LogScale initially creates the topic. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB over a few hours. Ideally, the global topic should have room for at least 1 day of traffic, for better resilience against large spikes in traffic combined with losing global snapshot files.

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than IAM roles or other means.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.

      • S3ArchivingClusterWideRegexForRepoName (matches nothing when not set) — the repository name regex must be set in order to enable the feature. When set, all repositories with a name matching the regex (unanchored) will be archived using the cluster-wide configuration from this variable.

  • Ingestion

    • On the Code page accessible from the Parsers menu when writing a new parser, the following validation rules have been added globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.

      For more information, see Creating a New Parser.
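      The contiguity rule above can be illustrated with a minimal sketch (the field name myArray is hypothetical):

      logscale Syntax
      myArray[0] := "first"
      | myArray[1] := "second"

      Assigning only myArray[1] without also assigning myArray[0] would violate the rule and trigger the validation error on the Code page.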

    • To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a null value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null value previously only happened for fields outside a list.

  • Log Collector

    • The RemoteUpdate version dialog has been improved, with the ability to cancel pending and scheduled updates.

  • Functions

    • Matching on multiple rows with the match() query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows parameter is used to specify the maximum number of rows to match on.

      For more information, see match().

    • The match() function now supports matching on multiple pairs of fields and columns.

      For more information, see match().
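      A hedged sketch combining the two match() improvements above; the file, field, and column names are hypothetical, and the exact parameter syntax should be checked against the match() reference:

      logscale Syntax
      match(file="lookup.csv", field=[srcIp, srcPort], column=[ip, port], nrows=5)

      Here up to 5 matching rows would be emitted per input event, one output event per matching row.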

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().
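      A minimal sketch, assuming a hypothetical field named message:

      logscale Syntax
      text:contains(string=message, substring="error")

      Per the entry above, both arguments can also be given as plain text or as results of an expression.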

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().
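      A minimal sketch appending one value to a hypothetical array field myArray[] (the array and values parameter names are assumptions; check the array:append() reference):

      logscale Syntax
      array:append(array="myArray[]", values=["newValue"])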

Fixed in this release

  • Falcon Data Replicator

    • Testing new FDR feeds using s3 aliasing would fail for valid credentials. This issue has now been fixed.

  • UI Changes

    • The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

    • The event histogram would not adhere to the timezone selected for the query.

    • When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.

    • In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.

    • A long list of large queries could prevent the query list under the Recent tab from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.

    • The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.

    • On the Organizations overview page, the width of the Volume column within a specific organization could not be adjusted. This issue has now been fixed.

    • The display of Lookup Files metadata in the file editor for very long user names has now been fixed.

    • The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.

    • When Creating a File, saving an invalid .csv file was possible in the file editor. This wrong behavior has now been fixed.

    • In the Export to file dialog used when Exporting Data, the CSV fields input would in some cases not be populated with all fields. This issue has now been fixed.

    • Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.

    • When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

  • Automation and Alerts

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

    • Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.

    • Scheduled Searches would not always log if runs were skipped due to being behind. This issue has been fixed now.

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

  • GraphQL API

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads could cause an unintentionally high number of concurrent uploads or downloads, to the point of exhausting the connection pool. This issue has now been fixed.

    • Notifying the Global Database about file changes could be slow. This issue has now been fixed.

    • Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.

    • Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.

    • Digest threads could fail to start digesting if global is very large and writing to global is slow. This issue has now been fixed.

    • The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.

  • API

  • Configuration

    • A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to the bucket being subject to "S3 Intelligent-Tiering". Some installations want this because they apply versioning to their bucket: even though the life span of a non-deleted object is short, the actual data remains in the bucket much longer, so tiering all objects saves on storage costs. Objects below 128 KB are never tiered in any case.

  • Dashboards and Widgets

    • Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.

    • The Table widget header could appear transparent. This issue has now been fixed.

  • Ingestion

    • Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.

    • A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.

      For more information, see Polling a Query Job.

    • Event Forwarding using match() or lookup() with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

    • When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.

    • Parser output events could be returned in the wrong order. This issue has now been fixed; the output now returns events in the correct order.

  • Log Collector

    • Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.

  • Functions

    • parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.

    • Parsing the empty string as a number could lead to errors causing the query to fail (in formatTime() function, for example). This issue has now been fixed.

    • The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.

    • Long running queries using window() could end up never completing. This issue has now been fixed.

    • writeJson() would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing . (dot).

    • A regression issue has been fixed in the match() function in cidr mode, which made query submission significantly slower.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Automation and Alerts

    • The log field previouslyPlannedForExecutionAt has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

    • The field useProxyOption has been added to Webhooks action templates to be consistent with the other action templates.

    • The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.

  • Ingestion

    • The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it will still validate that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled, where the emitted digest files (which do not have the Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.

    • The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the Records array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, leading to notifications in SQS being deleted without any events being ingested.

  • Queries

Falcon LogScale 1.153.3 LTS (2024-10-02)

Version: 1.153.3
Type: LTS
Release Date: 2024-10-02
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No


These notes include entries from the following previous releases: 1.153.1

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file. This aligns with the behavior of calling the same function with a single column.

      For more information, see match().

Removed

Items that have been removed as of this release.

Installation and Deployment

  • The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

  • The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.

API

  • The following previously deprecated KAFKA API endpoints have been removed:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id

Configuration

Other

  • Unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, the logging in the IngestPartitionCoordinator class has been improved, to allow monitoring of when reassignment of desired and current digesters happens — by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • The default cleanup.policy for the transientChatter-events topic has been switched from compact to delete,compact. This change will not apply to existing clusters. Changing this setting to delete,compact via Kafka's command line tools is particularly recommended if transientChatter is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Storage

    • Reduced the waiting time for redactEvents background jobs to complete.

      The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job previously waited pessimistically for MAX_HOURS_SEGMENT_OPEN (30 days) before attempting the rewrite. It now waits for FLUSH_BLOCK_SECONDS (15 minutes) before attempting the rewrite; while some mini-segments may still not be rewritten for up to 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

      For more information, see Redact Events API.

  • Configuration

    • When a global publish to Kafka times out from digester threads, the system would previously initiate a failure shutdown. Starting from version 1.144, the system instead retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.

    • Autoshards no longer respond to ingest delay by default, and now support round-robin instead.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
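      A sketch of the difference (the array name is hypothetical):

      ```
      // Works: the argument names an array field, with [] brackets
      array:length(array="myArray[]")

      // Now throws an exception instead of silently returning 0:
      // array:length("myArray")
      ```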

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.

      It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138, see Falcon LogScale 1.138.0 GA (2024-05-14)

New features and improvements

  • Security

    • When extending Retention span or size, any segments that were marked for deletion — but where the files remain in the system — are automatically resurrected. How much data you reclaim via this depends on the backupAfterMillis configuration on the repository.

      For more information, see Audit Logging.

  • Installation and Deployment

    • The Docker containers have been configured to use the following environment variable values internally:

      • DIRECTORY=/data/humio-data

      • HUMIO_AUDITLOG_DIR=/data/logs

      • HUMIO_DEBUGLOG_DIR=/data/logs

      • JVM_LOG_DIR=/data/logs

      • JVM_TMP_DIR=/data/humio-data/jvm-tmp

      This configuration replaces the following chains of internal symlinks, which have been removed:

      • /app/humio/humio/humio-data to /app/humio/humio-data

      • /app/humio/humio-data to /data/humio-data

      • /app/humio/humio/logs to /app/humio/logs

      • /app/humio/logs to /data/logs

      This change allows the tool scripts in /app/humio/humio/bin to work correctly; they previously failed due to dangling symlinks when invoked via docker run with nothing mounted at /data.

  • UI Changes

    • LogScale administrators can now set the default timezone for their users.

      For more information, see Setting Time Zone.

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

    • The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.

      For more information, see Changing Time Interval.

    • A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.

      For more information, see Alert Properties.

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

      For more information, see Creating a File.

    • Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.

      For more information, see Sections.

    • The Users page has been redesigned so that the Repository and view roles are displayed in a right hand side panel which opens when a repository or view is selected. The repository and views roles panel shows the roles that give permissions to the user for the selected repository or view, together with groups that apply to them and the corresponding query prefixes.

      For more information, see Manage Users.

    • An organization administrator can now update a user's role on a repository or view from the Users page.

      For more information, see Manage User Roles.

    • The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.

    • The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.

      For more information, see Query Monitor — Query Details.

    • In Organization settings, layout changes have been made to the Groups page for viewing and updating repository and view permissions on a group.

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, lowering the throttle time to 1 week at most will be required.

    • Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.

      For more information, see Alerts.

    • The {action_invocation_id} message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

      For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.

    • It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.

      For more information, see Field-Based Throttling.

    • Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.

    • The following UI changes have been introduced for alerts:

      • The Alerts overview page now presents a table with search and filtering options.

      • An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.

      • The alert's properties are opened in a side panel when creating or editing an alert.

      • In the side panel, the recommended alert type to choose is suggested based on the query.

      • For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).

      For more information, see Creating Alerts, Alert Properties.

    • A new Disabled actions status has been added and is visible in the Alerts overview table. This status is displayed when an alert (or scheduled search) has only disabled actions attached.

      For more information, see Alerts Overview.

    • Audit logs for Filter Alerts now contain the language version of the alert query.

    • A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at-least-once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.

      For more information, see Aggregate Alerts.

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, then the Scheduled reports menu item under Automation will not show.

      • If the feature is disabled or the render service is in an error state, users who have been granted the ChangeScheduledReport permission will be presented with a banner on the Scheduled reports overview page when they try to access it.

      • The permissions overview in the UI now notes that the feature must be enabled and configured correctly for the cluster in order for the ChangeScheduledReport permission to have any effect.

    • Users can now see warnings and errors associated with alerts in the Alerts page opened in read-only mode.

  • GraphQL API

    • The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list [[]]. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.

    • The log line containing Executed GraphQL query in the humio repository, that is logged for every GraphQL call, now contains the name of the mutations and queries that are executed.

    • The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.

    • The preview tag has been removed from the following GraphQL mutations:

    • The stopStreamingQueries() GraphQL mutation is no longer in preview.

    • The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, and while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

    • The new startFromDateTime argument has been added to s3ConfigureArchiving GraphQL mutation. When set, S3Archiving does not consider segment files that have a start time that is before this point in time. This in particular allows enabling S3 archiving only from a point in time and going forward, without archiving all the older files too.

    • A new field named searchUsers has been added on the group() output type in GraphQL, which is used to search users in the group. The field also allows for pagination, ordering, and sorting of the result set.

  • Storage

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently, by setting the Content-MD5 header during upload thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.

    • Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:

      • Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.

      • Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.

      • Downloading the file that was uploaded, in order to validate the checksum file. This mode is enabled if neither of the other modes are enabled.

      Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.
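      As a sketch, opting out of both ETag checks forces the download-and-verify mode (the values shown are assumptions about how these parameters are set):

      ```
      BUCKET_STORAGE_IGNORE_ETAG_UPLOAD=true
      BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD=true
      ```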

    • The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.

      For more information, see Bucket Storage.

    • For better efficiency, more than one object is now deleted from Bucket Storage per request to S3 in order to reduce the number of requests to S3.

    • Support has been implemented for returning results over 1 GB in size on the queryjobs endpoint; the returned result is now limited to 8 GB. The limits on state sizes for queries remain unaltered, so some queries that previously completed but failed to return a result larger than 1 GB now work.

  • API

  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

    • The QueryBacktrackingLimit feature is now enabled by default. The default value for the max number of backtracks (number of times a single event can be processed) a query can do has been reduced to 2,000.

    • Adjusted launcher script handling of the CORES environment variable:

      If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

      -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.

    • The default retention.bytes for the global topic has been modified from 1 GB to 20 GB. This is applied only when the topic is initially created by LogScale. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB over a few hours. Ideally, the global topic should have room for at least 1 day of traffic, for better resilience against large spikes in traffic combined with losing global snapshot files.

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work like they do for S3 Archiving, except that the region/bucket is selected here via configuration, and not dynamically by the end users, and also that the authentication is via explicit accesskey and secret, and not via IAM roles or any other means.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.

      • S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories that have a name that matches the regex (unanchored) will be archived using the cluster wide configuration from this variable.
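      Putting the required parameters together, a minimal static configuration might look like this (the bucket, region, and credentials are placeholders):

      ```
      S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY=<access key>
      S3_CLUSTERWIDE_ARCHIVING_SECRETKEY=<secret key>
      S3_CLUSTERWIDE_ARCHIVING_REGION=us-east-1
      S3_CLUSTERWIDE_ARCHIVING_BUCKET=example-archive-bucket
      ```

      Archiving then only becomes active once the S3ArchivingClusterWideRegexForRepoName dynamic configuration is set to a regex matching the names of the target repositories.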

  • Ingestion

    • On the Code page accessible from the Parsers menu when writing a new parser, the following validation rules have been added globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.

      For more information, see Creating a New Parser.
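      As an illustrative sketch (the field names are hypothetical), the first two assignments below pass the new validation, while the commented-out lines would trigger an error on the parser Code page:

      ```logscale
      parseJson()
      | myArray[0] := "some value"       // valid: array has index 0
      | myArray[1] := "another value"    // valid: indices are contiguous
      // | other[5] := "oops"            // invalid: no other[0], array is not contiguous
      // | #env := "prod"                // invalid unless #env is configured as a tag
      ```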

    • To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a null value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null value previously only happened for fields outside a list.

  • Log Collector

    • RemoteUpdate version dialog has been improved, with the ability to cancel pending and scheduled updates.

  • Functions

    • Matching on multiple rows with the match() query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows parameter is used to specify the maximum number of rows to match on.

      For more information, see match().

    • The match() function now supports matching on multiple pairs of fields and columns.

      For more information, see match().
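      As a hedged sketch of both additions (the file, field, and column names are hypothetical, and the exact multi-pair syntax should be verified against the match() reference):

      ```logscale
      // Emit up to 3 events per input event, one for each matching row in the file.
      match(file="threat-intel.csv", field=src_ip, column=ip, nrows=3)
      // Match on multiple field/column pairs at once.
      | match(file="endpoints.csv", field=[host, port], column=[hostname, portnumber])
      ```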

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().
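      For example (a sketch with hypothetical field names), used as a filter:

      ```logscale
      // Keep only events whose message field contains the substring "timeout".
      text:contains(string=message, substring="timeout")
      // Both arguments can also come from fields or expressions:
      | text:contains(string=message, substring=errorCode)
      ```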

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().
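      A minimal sketch (the array and value names are hypothetical; check the array:append() reference for the exact parameters):

      ```logscale
      // Append a value to an existing tags[] array, or create the array if absent.
      array:append(array="tags[]", values=["reviewed"])
      ```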

Fixed in this release

  • Falcon Data Replicator

    • Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.

  • UI Changes

    • The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

    • The event histogram would not adhere to the timezone selected for the query.

    • When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.

    • In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.

    • A long list of large queries could prevent the queries' list under the Recent tab from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.

    • The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.

    • On the Organizations overview page, the width of the Volume column within a specific organization could not be adjusted. This issue has now been fixed.

    • The display of Lookup Files metadata in the file editor for very long user names has now been fixed.

    • The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.

    • When Creating a File, saving an invalid .csv file was possible in the file editor. This wrong behavior has now been fixed.

    • In the Export to file dialog used when Exporting Data, the CSV fields input would in some cases not be populated with all fields. This issue has now been fixed.

    • Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.

    • When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

  • Automation and Alerts

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

    • Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.

    • Scheduled Searches would not always log if runs were skipped due to being behind. This issue has been fixed now.

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

  • GraphQL API

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.

    • Notifying the Global Database about file changes could be slow. This issue has now been fixed.

    • Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.

    • Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.

    • Digest threads could fail to start digesting if global is very large, and if writing to global is slow. This issue has now been fixed.

    • The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.

  • Configuration

    • A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to bucket storage being subject to "S3 Intelligent-Tiering". Some installations want this because they apply versioning to their bucket: even though the life span of a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs for them. Objects below 128 KB are never tiered in any case.

  • Dashboards and Widgets

    • Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.

    • The Table widget has been fixed due to its header appearing transparent.

  • Ingestion

    • Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.

    • A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.

      For more information, see Polling a Query Job.

    • Event Forwarding using match() or lookup() with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

    • When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.

    • A wrong order of the output events for parsers has been fixed — the output now returns the correct event order.

  • Log Collector

    • Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.

  • Functions

    • parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.

    • Parsing the empty string as a number could lead to errors causing the query to fail (in formatTime() function, for example). This issue has now been fixed.

    • The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.

    • Long running queries using window() could end up never completing. This issue has now been fixed.

    • writeJson() would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing . (dot).

    • A regression has been fixed in the match() function in cidr mode, which made query submission significantly slower.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Automation and Alerts

    • The log field previouslyPlannedForExecutionAt has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

    • The field useProxyOption has been added to Webhooks action templates to be consistent with the other action templates.

    • The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.

  • Ingestion

    • The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it still validates that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled: in such cases, the emitted digest files (which do not have a Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.

    • The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the Records array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, leading notifications in SQS to be deleted without ingesting any events.

Falcon LogScale 1.153.2 Internal (2024-09-18)

Version: 1.153.2
Type: Internal
Release Date: 2024-09-18
Availability: Internal Only
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Internal-only release.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
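      As an illustrative sketch (the array name is hypothetical, and the as parameter is assumed from the function's usual form):

      ```logscale
      parseJson()
      | array:length("myArray[]", as=count)    // valid: array name includes [] brackets
      // | array:length("myArray")             // now throws an exception: not an array field
      ```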

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Falcon LogScale 1.153.1 LTS (2024-09-18)

Version: 1.153.1
Type: LTS
Release Date: 2024-09-18
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No


Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file. This now aligns with the behavior of calling the same function with a single column.

      For more information, see match().

Removed

Items that have been removed as of this release.

Installation and Deployment

  • The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

  • The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.

API

  • The following previously deprecated Kafka API endpoints have been removed:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id

Configuration

Other

  • Unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, the logging in the IngestPartitionCoordinator class has been improved, to allow monitoring of when reassignment of desired and current digesters happens — by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • The default cleanup.policy for the transientChatter-events topic has been switched from compact to delete,compact. This change will not apply to existing clusters. Changing this setting to delete,compact via Kafka's command line tools is particularly recommended if transientChatter is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.
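      Assuming default topic naming (a topic prefix may apply in your cluster) and a broker reachable at localhost:9092 (illustrative), the policy can be changed with Kafka's standard tooling:

      ```shell
      # Switch the chatter topic's cleanup policy to delete,compact on an existing cluster.
      kafka-configs.sh --bootstrap-server localhost:9092 \
        --alter --entity-type topics --entity-name transientChatter-events \
        --add-config cleanup.policy=delete,compact
      ```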

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Storage

    • Reduced the waiting time for redactEvents background jobs to complete.

      The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for MAX_HOURS_SEGMENT_OPEN (30 days) before attempting the rewrite. This has been changed to wait for FLUSH_BLOCK_SECONDS (15 minutes) before attempting the rewrite. While some mini-segments may still not be rewritten for up to 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

      For more information, see Redact Events API.

  • Configuration

    • When global publish to Kafka times out from digester threads, the system would initiate a failure shutdown. Instead, as of version 1.144, the system retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.

    • Autoshards no longer respond to ingest delay by default, and now support round-robin instead.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.

      It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138; see Falcon LogScale 1.138.0 GA (2024-05-14).

New features and improvements

  • Security

    • When extending Retention span or size, any segments that were marked for deletion — but where the files remain in the system — are automatically resurrected. How much data you reclaim via this depends on the backupAfterMillis configuration on the repository.

      For more information, see Audit Logging.

  • Installation and Deployment

    • The Docker containers have been configured to use the following environment variable values internally:

      • DIRECTORY=/data/humio-data

      • HUMIO_AUDITLOG_DIR=/data/logs

      • HUMIO_DEBUGLOG_DIR=/data/logs

      • JVM_LOG_DIR=/data/logs

      • JVM_TMP_DIR=/data/humio-data/jvm-tmp

      This configuration replaces the following chains of internal symlinks, which have been removed:

      • /app/humio/humio/humio-data to /app/humio/humio-data

      • /app/humio/humio-data to /data/humio-data

      • /app/humio/humio/logs to /app/humio/logs

      • /app/humio/logs to /data/logs

      This change allows the tool scripts in /app/humio/humio/bin to work correctly; they previously failed due to the presence of dangling symlinks when invoked via docker run if nothing was mounted at /data.

  • UI Changes

    • LogScale administrators can now set the default timezone for their users.

      For more information, see Setting Time Zone.

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

    • The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.

      For more information, see Changing Time Interval.

    • A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.

      For more information, see Alert Properties.

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

      For more information, see Creating a File.

    • Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.

      For more information, see Sections.

    • The Users page has been redesigned so that the Repository and view roles are displayed in a right hand side panel which opens when a repository or view is selected. The repository and views roles panel shows the roles that give permissions to the user for the selected repository or view, together with groups that apply to them and the corresponding query prefixes.

      For more information, see Manage Users.

    • An organization administrator can now update a user's role on a repository or view from the Users page.

      For more information, see Manage User Roles.

    • The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.

    • The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.

      For more information, see Query Monitor — Query Details.

    • In Organization settings, layout changes have been made to the Groups page for viewing and updating repository and view permissions on a group.

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when edited, lowering the throttle time to 1 week at most will be required.

    • Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.

      For more information, see Alerts.

    • The {action_invocation_id} message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

      For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.

    • It is no longer possible to use @id as throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.

      For more information, see Field-Based Throttling.

    • Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.

    • The following UI changes have been introduced for alerts:

      • The Alerts overview page now presents a table with search and filtering options.

      • An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.

      • The alert's properties are opened in a side panel when creating or editing an alert.

      • In the side panel, the recommended alert type to choose is suggested based on the query.

      • For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).

      For more information, see Creating Alerts, Alert Properties.

    • A new Disabled actions status has been added and is visible in the Alerts overview table. This status is displayed when an alert (or scheduled search) has only disabled actions attached.

      For more information, see Alerts Overview.

    • Audit logs for Filter Alerts now contain the language version of the alert query.

    • A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at least once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.

      For more information, see Aggregate Alerts.

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, then the Scheduled reports menu item under Automation will not show.

      • If the feature is disabled or the render service is in an error state, users who have the ChangeScheduledReport permission and try to access the feature will be presented with a banner on the Scheduled reports overview page.

      • The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster, in order for the ChangeScheduledReport permission to have any effect.

    • Users can now see warnings and errors associated with alerts in the Alerts page opened in read-only mode.

  • GraphQL API

    • The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list [[]]. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.

    • The log line containing Executed GraphQL query in the humio repository, that is logged for every GraphQL call, now contains the name of the mutations and queries that are executed.

    • The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.

    • The preview tag has been removed from the following GraphQL mutations:

    • The stopStreamingQueries() GraphQL mutation is no longer in preview.

    • The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches. This happens when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

    • The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 Archiving does not consider segment files with a start time before this point in time. In particular, this allows enabling S3 archiving from a given point in time going forward, without also archiving all older files.

    • A new field named searchUsers has been added on the group() output type in GraphQL, which is used to search users in the group. The field also allows for pagination, ordering, and sorting of the result set.

  • Storage

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently, by setting the Content-MD5 header during upload thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.

    • Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:

      • Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.

      • Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.

      • Downloading the file that was uploaded, in order to validate the checksum file. This mode is enabled if neither of the other modes are enabled.

      Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.

    • The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.

      For more information, see Bucket Storage.

    • For better efficiency, multiple objects are now deleted from Bucket Storage per S3 request, reducing the total number of requests to S3.

    • Support has been implemented for returning a result over 1 GB in size on the queryjobs endpoint; the size of the returned result is now limited to 8 GB. The limits on state sizes for queries remain unaltered, so some queries that previously completed but failed to return their results because they reached 1 GB now work.


  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

    • The QueryBacktrackingLimit feature is now enabled by default. The default value for the maximum number of backtracks a query can do (the number of times a single event can be processed) has been reduced to 2,000.

    • Adjusted launcher script handling of the CORES environment variable:

      If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

      -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.
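      For example, to pin both LogScale and the JVM to 16 cores via the launcher (the value is illustrative):

      ```shell
      # The launcher will pass -XX:ActiveProcessorCount=16 to the JVM;
      # do not set -XX:ActiveProcessorCount directly in HUMIO_OPTS.
      export CORES=16
      ```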

    • The default retention.bytes for the global topic has been changed from 1 GB to 20 GB. This applies only when LogScale initially creates the topic. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough data to exceed 1 GB in a few hours. Ideally, the global topic should have room for at least one day of data, for better resilience against large traffic spikes combined with the loss of global snapshot files.
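      For an existing cluster, retention on the global topic can be raised with Kafka's own tooling; a sketch, assuming the topic is named global-events (the actual name depends on your topic prefix configuration) and a 20 GB target:

      ```shell
      kafka-configs.sh --bootstrap-server kafka:9092 --alter \
        --entity-type topics --entity-name global-events \
        --add-config retention.bytes=21474836480   # 20 GB
      ```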

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and that authentication is via an explicit access key and secret rather than IAM roles or other means.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.

      • S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories with a name matching the regex (unanchored) will be archived using the cluster-wide configuration from this variable.
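      A minimal sketch of the static configuration; the credentials, region, bucket, and prefix below are placeholders:

      ```shell
      # Required static configuration (placeholder values)
      export S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY="EXAMPLEKEY"
      export S3_CLUSTERWIDE_ARCHIVING_SECRETKEY="EXAMPLESECRET"
      export S3_CLUSTERWIDE_ARCHIVING_REGION="us-east-1"
      export S3_CLUSTERWIDE_ARCHIVING_BUCKET="example-archive-bucket"
      # Optional: prefix within the bucket
      export S3_CLUSTERWIDE_ARCHIVING_PREFIX="archive/"
      # The feature stays inactive until the dynamic configuration
      # S3ArchivingClusterWideRegexForRepoName is set (e.g. to "^prod-").
      ```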

  • Ingestion

    • On the Code page accessible from the Parsers menu when writing a new parser, the following validation rules have been added globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.

      For more information, see Creating a New Parser.
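      A minimal sketch of a parser body that satisfies the array rule (field names are hypothetical):

      ```logscale
      // Arrays must be contiguous and start at index 0
      myArray[0] := "first value"
      myArray[1] := "second value"
      // A gap (for example, assigning only myArray[1]) now produces an
      // error on the Code page. Fields prefixed with # must additionally
      // be configured as tags in the parser settings.
      ```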

    • To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a null value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null value previously only happened for fields outside a list.

  • Log Collector

    • The RemoteUpdate version dialog has been improved; it is now possible to cancel pending and scheduled updates.

  • Functions

    • Matching on multiple rows with the match() query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows parameter is used to specify the maximum number of rows to match on.

      For more information, see match().

    • The match() function now supports matching on multiple pairs of fields and columns.

      For more information, see match().
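      A sketch of both capabilities; the file name and field names below are hypothetical:

      ```logscale
      // Emit up to 3 events per input event, one per matching CSV row
      match(file="hosts.csv", field=ip, nrows=3)

      // Match on multiple field/column pairs at once
      match(file="hosts.csv", field=[ip, port], column=[address, port])
      ```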

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().
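      For example, used as a filter (the field name is hypothetical):

      ```logscale
      // Keep only events whose message field contains "timeout"
      text:contains(string=message, substring="timeout")
      ```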

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().
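      A sketch of appending to an array field; the field and value are hypothetical:

      ```logscale
      // Append a value to an existing array, or create the array if absent
      array:append(array="errors[]", values=["parse-failure"])
      ```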

Fixed in this release

  • Falcon Data Replicator

    • Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.

  • UI Changes

    • The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

    • The event histogram would not adhere to the timezone selected for the query.

    • When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.

    • In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.

    • A long list of large queries could break the list of queries under the Recent tab, preventing it from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.

    • The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.

    • On the Organizations overview page, the width of the Volume column within a specific organization could not be adjusted. This issue has now been fixed.

    • The display of Lookup Files metadata in the file editor for very long user names has now been fixed.

    • The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.

    • When Creating a File, saving an invalid .csv file was possible in the file editor. This wrong behavior has now been fixed.

    • In the Export to file dialog used when Exporting Data, the CSV fields input would in some cases not be populated with all fields. This issue has now been fixed.

    • Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.

    • When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

  • Automation and Alerts

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

    • Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.

    • Scheduled Searches would not always log if runs were skipped due to being behind. This issue has now been fixed.

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

  • GraphQL API

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exhausting the pool of connections.

    • Notifying the Global Database about file changes could be slow. This issue has now been fixed.

    • Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.

    • Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.

    • Digest threads could fail to start digesting if global is very large, and if writing to global is slow. This issue has now been fixed.

    • The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.


  • Configuration

    • A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to the bucket being subject to S3 Intelligent-Tiering. Some installs want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs. Objects below 128KB are never tiered in any case.

  • Dashboards and Widgets

    • Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.

    • The Table widget has been fixed due to its header appearing transparent.

  • Ingestion

    • Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.

    • A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.

      For more information, see Polling a Query Job.

    • Event Forwarding using match() or lookup() with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

    • When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.

    • A wrong ordering of output events from parsers has been fixed: the output now returns events in the correct order.

  • Log Collector

    • Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.

  • Functions

    • parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.

    • Parsing the empty string as a number could lead to errors causing the query to fail (in formatTime() function, for example). This issue has now been fixed.

    • The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.

    • Long running queries using window() could end up never completing. This issue has now been fixed.

    • writeJson() would write invalid JSON by not correctly quoting numbers starting with unary plus or ending with a trailing . (dot).

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Automation and Alerts

    • The log field previouslyPlannedForExecutionAt has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

    • The field useProxyOption has been added to Webhooks action templates to be consistent with the other action templates.

    • The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.

  • Ingestion

    • The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it still validates that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled, where the emitted digest files (which do not have the Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.

    • The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the Records array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, causing notifications in SQS to be deleted without any events being ingested.


Falcon LogScale 1.153.0 GA (2024-08-27)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.153.0GA2024-08-27

Cloud

2025-09-30No1.112No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Configuration

    • Autoshards no longer respond to ingest delay by default, and now support round-robin instead.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
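      A sketch of the new requirement; the field name is hypothetical:

      ```logscale
      // Correct: the array name must include the [] brackets
      array:length(array="myArray[]")
      // array:length(array="myArray") now throws an error
      // instead of silently returning 0
      ```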

New features and improvements

  • UI Changes

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, then the Scheduled reports menu item under Automation will not show.

      • If the feature is disabled or the render service is in an error state, users who are granted the ChangeScheduledReport permission and try to access it will be presented with a banner on the Scheduled reports overview page.

      • The permissions overview in the UI now informs that the feature must be enabled and configured correctly for the cluster, in order for the ChangeScheduledReport permission to have any effect.

  • GraphQL API

    • The getFileContent() GraphQL query will now filter CSV file rows case-insensitively and allow partial text matches when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

  • Storage

    • For better efficiency, multiple objects are now deleted from Bucket Storage per request, reducing the number of requests made to S3.

  • Configuration

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and that authentication is via an explicit access key and secret rather than IAM roles or other means.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.

      • S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories with a name matching the regex (unanchored) will be archived using the cluster-wide configuration from this variable.

  • Ingestion

    • On the Code page accessible from the Parsers menu when writing a new parser, the following validation rules have been added globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing.

      For more information, see Creating a New Parser.

Fixed in this release

  • UI Changes

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • In the Export to file dialog used when Exporting Data, the CSV fields input would in some cases not be populated with all fields. This issue has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exhausting the pool of connections.

    • Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.

  • Functions

    • The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Ingestion

    • The input validation on Split by AWS records preprocessing when Set up a New Ingest Feed has been simplified: it still validates that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled, where the emitted digest files (which do not have the Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.

Falcon LogScale 1.152.0 GA (2024-08-20)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.152.0GA2024-08-20

Cloud

2025-09-30No1.112No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Configuration

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.

New features and improvements

  • UI Changes

    • In Organization settings, layout changes have been made to the Groups page for viewing and updating repository and view permissions on a group.


  • Configuration

    • The default retention.bytes for the global topic has been changed from 1 GB to 20 GB. This applies only when LogScale initially creates the topic. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough data to exceed 1 GB in a few hours. Ideally, the global topic should have room for at least one day of data, for better resilience against large traffic spikes combined with the loss of global snapshot files.

Fixed in this release

  • UI Changes

    • The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

  • Automation and Alerts

    • Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.

    • Scheduled Searches would not always log if runs were skipped due to being behind. This issue has now been fixed.

  • Dashboards and Widgets

    • The Table widget has been fixed due to its header appearing transparent.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in query performance and block other queries from executing.

Improvement

  • Automation and Alerts

    • The log field previouslyPlannedForExecutionAt has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

    • The field useProxyOption has been added to Webhooks action templates to be consistent with the other action templates.

    • The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.

  • Ingestion

    • The Split by AWS records preprocessing when Set up a New Ingest Feed now requires the Records array. This better protects against a situation where mistakenly using this preprocessing step with non-AWS records would interpret the files as empty batches of events, causing notifications in SQS to be deleted without any events being ingested.

Falcon LogScale 1.151.1 GA (2024-08-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.151.1GA2024-08-15

Cloud

2025-09-30No1.112No

Available for download two days after release.

Bug fixes recommended for all customers.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.
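
      As an illustration of the stricter validation, here is a minimal Python sketch that mimics the described behavior; the flattened event layout and helper name are assumptions for illustration, not LogScale internals:

      ```python
      def array_length(event: dict, array: str) -> int:
          """Sketch of the validation described above: the argument must be an
          array field name ending in '[]'; a plain field name is rejected."""
          if not array.endswith("[]"):
              raise ValueError(f"{array!r} is not an array field name")
          prefix = array[:-2]
          # In this sketch, array fields are flattened as field[0], field[1], ...
          return sum(1 for k in event if k.startswith(prefix + "[") and k.endswith("]"))

      event = {"emails[0]": "a@example.com", "emails[1]": "b@example.com"}
      print(array_length(event, "emails[]"))  # 2
      ```

      With the old behavior, passing a bare field name silently produced 0; the sketch instead raises, matching the new validation.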

Fixed in this release

  • Ingestion

    • Fixed an issue where queries with a large number of OR statements would crash the parser and cause a node to fail.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Falcon LogScale 1.151.0 GA (2024-08-13)

Version: 1.151.0
Type: GA
Release Date: 2024-08-13
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.

New features and improvements

  • UI Changes

    • LogScale administrators can now set the default timezone for their users.

      For more information, see Setting Time Zone.

    • The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.

    • The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.

      For more information, see Query Monitor — Query Details.

  • Automation and Alerts

    • It is no longer possible to use @id as the throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as the throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.

      For more information, see Field-Based Throttling.

  • GraphQL API

    • A new field named searchUsers has been added to the group() output type in GraphQL; it is used to search for users in the group. The field also supports pagination, ordering, and sorting of the result set.

  • Configuration

    • The QueryBacktrackingLimit feature is now enabled by default. The default value for the maximum number of backtracks a query can perform (the number of times a single event can be processed) has been reduced to 2,000.

  • Ingestion

    • To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a null value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null value previously only happened for fields outside a list.

Fixed in this release

  • UI Changes

    • The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.

    • When Creating a File, it was possible to save an invalid .csv file in the file editor. This issue has now been fixed.

  • Dashboards and Widgets

    • Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.

  • Ingestion

    • Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.

    • Event Forwarding using match() or lookup() with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

  • Log Collector

    • Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.

  • Functions

    • writeJson() would write invalid JSON by not correctly quoting numbers starting with a unary plus or ending with a trailing . (dot). This issue has now been fixed.
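
      The quoting rule is easy to check with Python's json module; this sketch is an emulation of the described fix, not LogScale's implementation:

      ```python
      import json

      def encode_number_field(value: str) -> str:
          """Emit value unquoted only if it is already valid JSON; otherwise
          quote it (covers '+5' and '5.', the cases described above)."""
          try:
              json.loads(value)
              return value
          except ValueError:
              return json.dumps(value)

      print(encode_number_field("5.0"))  # 5.0
      print(encode_number_field("+5"))   # "+5"
      print(encode_number_field("5."))   # "5."
      ```

      Emitting `+5` or `5.` unquoted would make the whole document unparseable, which is the bug the entry describes.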

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Falcon LogScale 1.150.1 GA (2024-08-15)

Version: 1.150.1
Type: GA
Release Date: 2024-08-15
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes recommended for all customers.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.

Fixed in this release

  • Ingestion

    • Fixed an issue where queries with a large number of OR statements would crash the parser and cause a node to fail.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Falcon LogScale 1.150.0 GA (2024-08-06)

Version: 1.150.0
Type: GA
Release Date: 2024-08-06
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.

New features and improvements

  • Installation and Deployment

    • The Docker containers have been configured to use the following environment variable values internally:

      • DIRECTORY=/data/humio-data

      • HUMIO_AUDITLOG_DIR=/data/logs

      • HUMIO_DEBUGLOG_DIR=/data/logs

      • JVM_LOG_DIR=/data/logs

      • JVM_TMP_DIR=/data/humio-data/jvm-tmp

      This configuration replaces the following chains of internal symlinks, which have been removed:

      • /app/humio/humio/humio-data to /app/humio/humio-data

      • /app/humio/humio-data to /data/humio-data

      • /app/humio/humio/logs to /app/humio/logs

      • /app/humio/logs to /data/logs

      This change allows the tool scripts in /app/humio/humio/bin to work correctly; they were previously failing due to dangling symlinks when invoked via docker run if nothing was mounted at /data.

  • UI Changes

    • Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.

      For more information, see Sections.

    • An organization administrator can now update a user's role on a repository or view from the Users page.

      For more information, see Manage User Roles.

  • Storage

    • Support has been implemented for returning results over 1GB in size on the queryjobs endpoint, with a new limit of 8GB on the size of the returned result. The limits on state sizes for queries remain unaltered, so some queries that previously completed but failed to return their results due to reaching the 1GB limit now work.

  • Functions

    • Matching on multiple rows with the match() query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows parameter is used to specify the maximum number of rows to match on.

      For more information, see match().
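
      The multi-row behavior can be pictured with a small Python sketch; the lookup-table layout, column names, and helper name are assumptions for illustration, not the match() implementation:

      ```python
      def match_rows(event, table, field, column, nrows=1):
          """Sketch of multi-row match: emit one copy of the event per
          matching lookup row, up to nrows rows (the new parameter)."""
          matches = [row for row in table if row.get(column) == event.get(field)]
          return [{**event, **row} for row in matches[:nrows]]

      table = [
          {"ip": "10.0.0.1", "host": "web-1"},
          {"ip": "10.0.0.1", "host": "web-2"},
      ]
      event = {"ip": "10.0.0.1"}
      print(match_rows(event, table, "ip", "ip", nrows=2))  # two enriched events
      ```

      With nrows=1 only the first matching row is emitted, which mirrors the single-row behavior before this change.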

Fixed in this release

  • Falcon Data Replicator

    • Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.

  • UI Changes

    • On the Organizations overview page, the Volume column width within a specific organization could not be adjusted. This issue has now been fixed.

    • The display of Lookup Files metadata in the file editor for very long user names has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.

    • The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function, when using the cidr option in the mode parameter, could reduce query performance and block other queries from executing.

Falcon LogScale 1.149.0 GA (2024-07-30)

Version: 1.149.0
Type: GA
Release Date: 2024-07-30
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

  • The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The bundled JDK is upgraded to 22.0.2.

Fixed in this release

  • UI Changes

    • Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.

    • When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.

  • Configuration

    • A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to the bucket being subject to "S3 Intelligent-Tiering". Some installs want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage costs. Objects below 128KB are never tiered in any case.

Falcon LogScale 1.148.0 Internal (2024-07-23)

Version: 1.148.0
Type: Internal
Release Date: 2024-07-23
Availability: Internal Only
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Internal-only release.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Removed

Items that have been removed as of this release.

API

  • The following previously deprecated KAFKA API endpoints have been removed:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Reduced the waiting time for redactEvents background jobs to complete.

      The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for MAX_HOURS_SEGMENT_OPEN (30 days) before attempting the rewrite. This has been changed to wait for FLUSH_BLOCK_SECONDS (15 minutes) before attempting the rewrite. This means that, while some mini-segments may still not be rewritten for 30 days, it is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.

      For more information, see Redact Events API.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always return 0 (since there was no array field named field). The function has now been updated to throw an exception if given a non-array field name in the array argument. The function therefore requires the given array name to include [ ] brackets, since it only works on array fields.
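
      The revised redaction scheduling described under Storage above can be sketched as follows; the segment records are hypothetical, and only the timing logic follows the description:

      ```python
      FLUSH_BLOCK_SECONDS = 15 * 60  # new initial wait (was MAX_HOURS_SEGMENT_OPEN, 30 days)

      def next_action(affected_segments, waited_seconds):
          """Sketch: wait FLUSH_BLOCK_SECONDS before the first rewrite attempt;
          if mini-segments are still present, postpone and retry later."""
          if waited_seconds < FLUSH_BLOCK_SECONDS:
              return "wait"
          if any(seg.get("mini") for seg in affected_segments):
              return "postpone"
          return "rewrite"

      print(next_action([{"mini": True}], 900))   # postpone
      print(next_action([{"mini": False}], 900))  # rewrite
      ```

      The key change is the first branch: the initial wait drops from 30 days to 15 minutes, while the postpone-and-retry path covers segments whose minis are not yet merged.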

New features and improvements

  • UI Changes

    • The Users page has been redesigned so that repository and view roles are displayed in a right-hand side panel which opens when a repository or view is selected. The repository and view roles panel shows the roles that give the user permissions on the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.

      For more information, see Manage Users.

  • Storage

    • The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.

      For more information, see Bucket Storage.

  • Configuration

    • Adjusted launcher script handling of the CORES environment variable:

      If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

      -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.
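
      The launcher's flag selection reduces to a small decision; here is a Python sketch of that logic under the stated behavior (falling back to os.cpu_count() is an assumption about how the launcher determines the value itself):

      ```python
      import os

      def processor_count_flag(env=None) -> str:
          """If CORES is set, use it; otherwise use a value the launcher
          determines itself (sketched here as os.cpu_count())."""
          env = os.environ if env is None else env
          cores = env.get("CORES") or os.cpu_count()
          return f"-XX:ActiveProcessorCount={int(cores)}"

      print(processor_count_flag({"CORES": "4"}))  # -XX:ActiveProcessorCount=4
      ```

      Either way, exactly one -XX:ActiveProcessorCount value reaches the JVM, which is why setting the flag directly via HUMIO_OPTS is ignored.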

Fixed in this release

  • UI Changes

    • The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.

  • Ingestion

    • A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.

      For more information, see Polling a Query Job.

Falcon LogScale 1.147.0 GA (2024-07-16)

Version: 1.147.0
Type: GA
Release Date: 2024-07-16
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release where the server.tar.gz artifact is included will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ] so that array:length("field") would always produce the result 0 (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.

New features and improvements

  • UI Changes

  • Automation and Alerts

    • Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.

      For more information, see Alerts.

    • The following UI changes have been introduced for alerts:

      • The Alerts overview page now presents a table with search and filtering options.

      • An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.

      • The alert's properties are opened in a side panel when creating or editing an alert.

      • In the side panel, the recommended alert type to choose is suggested based on the query.

      • For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).

      For more information, see Creating Alerts, Alert Properties.

    • A new Disabled actions status has been added and is visible in the Alerts overview table. This status is displayed when an alert (or scheduled search) has only disabled actions attached.

      For more information, see Alerts Overview.

    • A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at-least-once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.

      For more information, see Aggregate Alerts.

  • Log Collector

    • The RemoteUpdate version dialog has been improved, with the ability to cancel pending and scheduled updates.

Fixed in this release

  • Ingestion

    • When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.

  • Functions

    • parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.

    • Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.

    • Long running queries using window() could end up never completing. This issue has now been fixed.

Falcon LogScale 1.146.0 GA (2024-07-09)

Version: 1.146.0
Type: GA
Release Date: 2024-07-09
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
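The migration amounts to replacing the JVM flag with an environment variable that the launcher reads. A minimal sketch, assuming a value of 16 cores (the commented start command is illustrative, not a documented path):

```shell
# Previously the core count could be pinned with a JVM flag:
#   -XX:ActiveProcessorCount=16
# From 1.148.0 that flag will be ignored; export CORES instead, before
# invoking the LogScale Launcher Script:
export CORES=16
# exec /opt/logscale/bin/humio start   # illustrative start command
echo "CORES=${CORES}"  # the launcher sizes both LogScale and the JVM from this
```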

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
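As a sketch of the Kafka-native workflow that replaces these endpoints: a reassignment is described in a JSON file and applied with the scripts that ship with Kafka. The topic name humio-ingest, broker address, and replica placement below are assumptions about a particular deployment; adjust them to your cluster.

```shell
# Deployment-specific assumptions:
TOPIC="humio-ingest"
BOOTSTRAP="localhost:9092"

# Describe the desired partition placement as reassignment JSON:
cat > reassign.json <<EOF
{"version": 1,
 "partitions": [
   {"topic": "${TOPIC}", "partition": 0, "replicas": [1, 2, 3]}
 ]}
EOF

# These scripts ship with the Kafka install (shown commented, not executed here):
# bin/kafka-topics.sh --bootstrap-server "${BOOTSTRAP}" --describe --topic "${TOPIC}"
# bin/kafka-reassign-partitions.sh --bootstrap-server "${BOOTSTRAP}" \
#   --reassignment-json-file reassign.json --execute
grep -c "\"topic\": \"${TOPIC}\"" reassign.json   # -> 1
```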

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and it also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release of these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when it is edited, the throttle time must be lowered to at most 1 week.

  • GraphQL API

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list [[]]. This change applies both to empty files and to cases where the provided filter string doesn't match any rows in the file.

  • Storage

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently by setting the Content-MD5 header during upload, thus allowing S3 to perform file validation instead of having LogScale do it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so the configuration variables that controlled the previous validation modes have no effect.

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.
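Disabling the new client is a single configuration change. It is shown below as a shell export for illustration; how configuration is supplied in practice depends on your deployment method:

```shell
# Fall back to the previous default S3 client (reach out to Support if you
# need this, since the previous client will eventually be removed):
export USE_AWS_SDK=false
echo "USE_AWS_SDK=${USE_AWS_SDK}"
```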

  • API

Fixed in this release

  • UI Changes

    • The event histogram would not adhere to the timezone selected for the query.

  • GraphQL API

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

  • API

  • Functions

    • Parsing the empty string as a number could lead to errors causing the query to fail (in formatTime() function, for example). This issue has now been fixed.

Falcon LogScale 1.145.0 GA (2024-07-02)

Version: 1.145.0
Type: GA
Release Date: 2024-07-02
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file. This now aligns with the behavior of calling the same function with a single column.

      For more information, see match().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and it also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release of these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • UI Changes

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

      For more information, see Creating a File.

  • Automation and Alerts

  • GraphQL API

    • The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 archiving does not consider segment files whose start time is before this point in time. In particular, this allows enabling S3 archiving from a given point in time forward, without also archiving all older files.

  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

  • Functions

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().

Fixed in this release

  • UI Changes

    • A long list of large queries could break the query list under the Recent tab, preventing it from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This issue has now been fixed.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

  • Automation and Alerts

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

  • GraphQL API

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

  • Storage

    • Notifying the Global Database about file changes could be slow. This issue has now been fixed.

  • Dashboards and Widgets

    • Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.

  • Ingestion

    • An issue causing parser output events to be returned in the wrong order has been fixed; the output now returns the correct event order.

Improvement

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segment targets switching to new merge targets at the same point in time.

Falcon LogScale 1.144.0 GA (2024-06-25)

Version: 1.144.0
Type: GA
Release Date: 2024-06-25
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and it also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release of these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • The default cleanup.policy for the transientChatter-events topic has been switched from compact to delete,compact. This change will not apply to existing clusters. Changing this setting to delete,compact via Kafka's command line tools is particularly recommended if transientChatter is taking up excessive space on disk, whereas it is less relevant in production environments where Kafka's disks tend to be large.
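The recommended change can be made with kafka-configs.sh, which ships with Kafka. A sketch, assuming a local broker and the default topic name (adjust both to your cluster); note that list-valued configs take square brackets on the kafka-configs command line:

```shell
TOPIC="transientChatter-events"
BOOTSTRAP="localhost:9092"
# The command is built but not executed here; run it against your own Kafka cluster:
CMD="bin/kafka-configs.sh --bootstrap-server ${BOOTSTRAP} --alter --entity-type topics --entity-name ${TOPIC} --add-config cleanup.policy=[delete,compact]"
echo "${CMD}"
```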

  • Configuration

    • When global publish to Kafka times out from digester threads, the system would previously initiate a failure shutdown. Instead, as of version 1.144, the system retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.
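Operators who want to watch for this condition can search for that error line in the node's log output. A sketch against a simulated log excerpt (the file name and log format here are assumptions for illustration; in practice you would search the humio repository or the node logs):

```shell
# Simulated log excerpt containing one retry error line:
cat > sample.log <<'EOF'
2024-06-25T12:00:00Z ERROR executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying
2024-06-25T12:00:01Z INFO  digest pipeline healthy
EOF
# Count retry occurrences; a steadily growing count suggests Kafka publish delays:
grep -c 'executeTransactionRetryingOnTimeout' sample.log   # -> 1
```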

New features and improvements

  • Automation and Alerts

    • Two new GraphQL fields have been added in the ScheduledSearch datatype:

      • lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.

      • lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.

      These two new fields are now also displayed in the Scheduled Searches user interface.

      For more information, see Last Executed and Last Triggered Scheduled Search.

  • GraphQL API

    • The log line containing Executed GraphQL query in the humio repository, which is logged for every GraphQL call, now contains the names of the mutations and queries that are executed.

  • Storage

    • Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:

      • Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.

      • Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.

      • Downloading the file that was uploaded, in order to validate the checksum file. This mode is enabled if neither of the other modes are enabled.

      Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.
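For context on the first two modes: for non-multipart uploads without SSE-KMS, the S3 ETag is typically the hex MD5 digest of the object body, so the check amounts to comparing digests. A sketch of computing the expected value with standard tools (this is generic S3 behavior, not LogScale-specific code):

```shell
# For simple (non-multipart, non-SSE-KMS) S3 uploads, the ETag is typically
# the hex MD5 of the object body:
printf '%s' 'hello' > object.bin
ETAG_HEX="$(md5sum object.bin | awk '{print $1}')"
echo "${ETAG_HEX}"   # -> 5d41402abc4b2a76b9719d911017c592
```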

Fixed in this release

Falcon LogScale 1.143.0 GA (2024-06-18)

Version: 1.143.0
Type: GA
Release Date: 2024-06-18
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Removed

Items that have been removed as of this release.

Other

  • The unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, the logging in the IngestPartitionCoordinator class has been improved to allow monitoring of when reassignment of desired and current digesters happens, by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and it also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release of these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.

      It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138; see Falcon LogScale 1.138.0 GA (2024-05-14).

New features and improvements

  • Security

    • When extending Retention span or size, any segments that were marked for deletion, but whose files remain in the system, are automatically resurrected. How much data you reclaim this way depends on the backupAfterMillis configuration on the repository.

      For more information, see Audit Logging.

  • GraphQL API

    • The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended as an aid to help do configuration discovery when managing a large number of LogScale clusters.

    • The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.

    • The preview tag has been removed from the following GraphQL mutations:

  • Functions

    • The match() function now supports matching on multiple pairs of fields and columns.

      For more information, see match().

Fixed in this release

  • UI Changes

    • In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.

  • Storage

    • Digest threads could fail to start digesting if global is very large, and if writing to global is slow. This issue has now been fixed.

Falcon LogScale 1.142.4 LTS (2024-12-17)

Version: 1.142.4
Type: LTS
Release Date: 2024-12-17
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No


These notes include entries from the following previous releases: 1.142.1, 1.142.3

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change, introduced due to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
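To prepare for this change, the core count can be pinned via the environment configuration read by the launcher. A minimal sketch, assuming an environment-file style setup (the exact file location varies by install method):

```shell
# Assumed environment-file configuration for the launcher script.
# Replaces -XX:ActiveProcessorCount=n, which will be ignored from 1.148.0;
# the launcher derives both LogScale and JVM settings from this value.
CORES=16
```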

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
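As a sketch of the replacement workflow, a partition reassignment can be expressed as a JSON file and applied with Kafka's own tooling. The topic name humio-ingest and the broker IDs below are assumptions for illustration; verify the actual names in your cluster first:

```shell
# Hypothetical reassignment for the ingest queue topic; check the real
# topic name and broker IDs (e.g. with bin/kafka-topics.sh --describe)
# before applying anything.
cat > reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "humio-ingest", "partition": 0, "replicas": [1, 2, 3] }
  ]
}
EOF
# Apply with the script bundled in the Kafka install (not executed here):
#   bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
#     --reassignment-json-file reassign.json --execute
```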

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • API

    • It is no longer possible to revive a query by polling it after it has been stopped.

      For more information, see Running Query Jobs.
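Client-side, the new behavior means a missing job is final. A sketch of the polling decision, assuming a poller that checks the HTTP status of the job endpoint (the helper name is hypothetical, not part of the API):

```shell
# Sketch: decide whether to keep polling a query job based on the HTTP
# status returned when fetching it. After this change, a stopped job
# returns 404 on subsequent polls and cannot be revived by polling.
poll_decision() {
  case "$1" in
    404) echo "terminal" ;;   # job stopped or expired: stop polling
    *)   echo "continue" ;;   # job still exists: poll again
  esac
}
```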

  • Other

    • LogScale deletes its humiotmp directories on graceful shutdown, but these directories could leak if LogScale crashed. LogScale now also deletes them on startup.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The bundled JDK is upgraded to 22.0.2.

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.

    • Bundled JDK upgraded to 22.0.1.

    • The JDK has been upgraded to 23.0.1.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden: a host will crash if its configured role does not match what is listed in global for that host. To change the role of a host in a cluster, instead remove the host by unregistering it, wipe its data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • Layout changes have been made in the Connections UI page.

      For more information, see Connections.

    • The maximum limit for saved query names has been set to 200 characters.

    • The warnings for numbers out of the browser's safe number range have been slightly modified.

      For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.

    • A new Field list column type has been added in the Event List. It formats all fields in the event in key-value pairs by grouping a field list by prefix.

      For more information, see Column Properties.

  • Automation and Alerts

    • Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.

      For more information, see Scheduled PDF Reports.

    • Two new GraphQL fields have been added in the ScheduledSearch datatype:

      • lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.

      • lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.

      These two new fields are now also displayed in the Scheduled Searches user interface.

      For more information, see Last Executed and Last Triggered Scheduled Search.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.

    • Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
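As a hedged sketch of calling the new mutation, the payload below can be posted to the /graphql endpoint; the exact input shape for unsetDynamicConfig is an assumption, so check your cluster's GraphQL schema (for example via introspection) before use:

```shell
# Assumed input shape for the unsetDynamicConfig mutation; verify against
# the GraphQL schema of your LogScale version.
cat > unset-config.graphql <<'EOF'
mutation {
  unsetDynamicConfig(input: { config: QueryBacktrackingLimit })
}
EOF
# POST the payload to /graphql with an Authorization: Bearer <token>
# header, e.g. via curl or the API explorer (not executed here).
```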

  • API

    • Upgraded to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.

    • Information about files used in a query is now added to the query result returned by the API.

  • Configuration

    • The EXACT_MATCH_LIMIT configuration has been removed. It is no longer needed, since files are limited by size instead of rows.

    • When UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.

    • A new QueryBacktrackingLimit dynamic configuration is available through GraphQL as experimental. It limits how many times a query may iterate over individual events, which can happen with excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags. The default limit is 3,000 and can be changed via the dynamic configuration. At present, the feature flag leaves this limit disabled by default.

  • Dashboards and Widgets

    • A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. The parameter width is also now adjustable in the settings.

      For more information, see Parameter Panel Widget.

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by select() or drop(#repo) in the rule.

    • Audit logs related to Event Forwarders no longer include the properties of the event forwarder.

      Event forwarder disablement is now audit logged with type disable instead of enable.

    • Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.

  • Log Collector

    • Fleet Management now supports ephemeral hosts. If a collector is enrolled with the --ephemeralTimeout parameter, it will be unenrolled and disappear from the Fleet Overview interface after being offline for the specified number of hours. This feature requires LogScale Collector version 1.7.0 or above.

    • Live and Historic options for the Fleet Overview are introduced. In Live mode, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all collector records for the last 30 days and is not updated with new information.

      For more information, see Switching between Live and Historic overview.

  • Functions

    • The onlyTrue parameter has been added to the bitfield:extractFlags() query function; it outputs only the flags whose value is true.

      For more information, see bitfield:extractFlags().

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes yield no results.

    • The query editor now warns about certain regex constructs that are valid but suboptimal, specifically quantified wildcards at the beginning or end of an (unanchored) regex.

    • Multi-valued arguments can now be passed to a saved query.

      For more information, see User Functions (Saved Searches).

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

    • Two new metrics have been introduced:

      • internal-throttled-poll-rate tracks how many times polling workers were throttled by rate limiting during query execution.

      • internal-throttled-poll-wait-time tracks the maximum delay per poll round due to rate limiting.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • An issue causing the Did not query segment error to spuriously appear when the cluster performs digest reassignment has now been fixed.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.

    • The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.

    • Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.

  • Functions

    • The query editor has been fixed: field auto-completions would sometimes not be suggested.

    • The query editor would mark the entire query as erroneous when count() was given the distinct=true parameter without an argument for the field parameter. This issue has been fixed.

    • Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.

    • The time:xxx() functions did not correctly use the query's time zone as the default: the offset was applied in reverse, so that, for example, GMT+2 was applied as GMT-2. This has now been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ was recognized as a shared file instead of a regular file; a shared file must be referenced using exactly /shared/ as a prefix.

    • Fixed a very rare edge case that could cause the creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • UI Changes

    • When a saved query is used, the query editor will display the query string when hovering over it.

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.

  • Packages

    • Package installation now validates that no duplicate names are used within each package template type (for example, two parsers in the same package cannot share a name).

Falcon LogScale 1.142.3 LTS (2024-08-23)

Version: 1.142.3
Type: LTS
Release Date: 2024-08-23
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

These notes include entries from the following previous releases: 1.142.1

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change, made in response to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change how CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead, which causes the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • API

    • It is no longer possible to revive a query by polling it after it has been stopped.

      For more information, see Running Query Jobs.

  • Other

    • LogScale deletes its humiotmp directories on graceful shutdown, but these directories could leak if LogScale crashed. LogScale now also deletes them on startup.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The bundled JDK is upgraded to 22.0.2.

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.

    • Bundled JDK upgraded to 22.0.1.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden: a host will crash if its configured role does not match what is listed in global for that host. To change the role of a host in a cluster, instead remove the host by unregistering it, wipe its data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • Layout changes have been made in the Connections UI page.

      For more information, see Connections.

    • The maximum limit for saved query names has been set to 200 characters.

    • The warnings for numbers out of the browser's safe number range have been slightly modified.

      For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.

    • A new Field list column type has been added in the Event List. It formats all fields in the event in key-value pairs by grouping a field list by prefix.

      For more information, see Column Properties.

  • Automation and Alerts

    • Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.

      For more information, see Scheduled PDF Reports.

    • Two new GraphQL fields have been added in the ScheduledSearch datatype:

      • lastExecuted will hold the timestamp of the end of the search interval on the last scheduled search run.

      • lastTriggered will hold the timestamp of the end of the search interval on the last scheduled search run that found results and triggered actions.

      These two new fields are now also displayed in the Scheduled Searches user interface.

      For more information, see Last Executed and Last Triggered Scheduled Search.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.

    • Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.

  • API

    • Upgraded to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.

    • Information about files used in a query is now added to the query result returned by the API.

  • Configuration

    • The EXACT_MATCH_LIMIT configuration has been removed. It is no longer needed, since files are limited by size instead of rows.

    • When UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.

    • A new QueryBacktrackingLimit dynamic configuration is available through GraphQL as experimental. It limits how many times a query may iterate over individual events, which can happen with excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags. The default limit is 3,000 and can be changed via the dynamic configuration. At present, the feature flag leaves this limit disabled by default.

  • Dashboards and Widgets

    • A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. The parameter width is also now adjustable in the settings.

      For more information, see Parameter Panel Widget.

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by select() or drop(#repo) in the rule.

    • Audit logs related to Event Forwarders no longer include the properties of the event forwarder.

      Event forwarder disablement is now audit logged with type disable instead of enable.

    • Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.

  • Log Collector

    • Fleet Management now supports ephemeral hosts. If a collector is enrolled with the --ephemeralTimeout parameter, it will be unenrolled and disappear from the Fleet Overview interface after being offline for the specified number of hours. This feature requires LogScale Collector version 1.7.0 or above.

    • Live and Historic options for the Fleet Overview are introduced. In Live mode, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all collector records for the last 30 days and is not updated with new information.

      For more information, see Switching between Live and Historic overview.

  • Functions

    • The onlyTrue parameter has been added to the bitfield:extractFlags() query function; it outputs only the flags whose value is true.

      For more information, see bitfield:extractFlags().

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes yield no results.

    • The query editor now warns about certain regex constructs that are valid but suboptimal, specifically quantified wildcards at the beginning or end of an (unanchored) regex.

    • Multi-valued arguments can now be passed to a saved query.

      For more information, see User Functions (Saved Searches).

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

    • Two new metrics have been introduced:

      • internal-throttled-poll-rate tracks how many times polling workers were throttled by rate limiting during query execution.

      • internal-throttled-poll-wait-time tracks the maximum delay per poll round due to rate limiting.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • An issue causing the Did not query segment error to spuriously appear when the cluster performs digest reassignment has now been fixed.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.

    • The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.

    • Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.

  • Functions

    • The query editor has been fixed: field auto-completions would sometimes not be suggested.

    • The query editor would mark the entire query as erroneous when count() was given the distinct=true parameter without an argument for the field parameter. This issue has been fixed.

    • Live queries using Field Aliasing on a repository with Tag Groupings enabled could fail. This issue has now been fixed.

    • The time:xxx() functions did not correctly use the query's time zone as the default: the offset was applied in reverse, so that, for example, GMT+2 was applied as GMT-2. This has now been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ was recognized as a shared file instead of a regular file; a shared file must be referenced using exactly /shared/ as a prefix.

    • Fixed a very rare edge case that could cause the creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • UI Changes

    • When a saved query is used, the query editor will display the query string when hovering over it.

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.

  • Packages

    • Package installation now validates that no duplicate names are used within each package template type (for example, two parsers in the same package cannot share a name).

Falcon LogScale 1.142.2 Internal (2024-07-09)

Version: 1.142.2
Type: Internal
Release Date: 2024-07-09
Availability: Internal Only
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Internal-only release.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change how CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead, which causes the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Falcon LogScale 1.142.1 LTS (2024-07-03)

Version: 1.142.1
Type: LTS
Release Date: 2024-07-03
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change, made in response to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change how CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead, which causes the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
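    Kafka's reassignment tool consumes a JSON plan file passed via --reassignment-json-file. As a rough sketch of the direction this deprecation points to, such a plan can be generated like this (the topic name humio-ingest and the broker layout are hypothetical examples; list your cluster's actual topics with bin/kafka-topics.sh):

    ```python
    import json

    def reassignment_plan(topic, replica_map):
        """Build the JSON plan consumed by kafka-reassign-partitions.sh.

        replica_map maps a partition number to the ordered list of broker
        ids that should hold its replicas.
        """
        return {
            "version": 1,
            "partitions": [
                {"topic": topic, "partition": p, "replicas": brokers}
                for p, brokers in sorted(replica_map.items())
            ],
        }

    # Hypothetical topic name and broker layout, for illustration only.
    plan = reassignment_plan("humio-ingest", {0: [1, 2], 1: [2, 3], 2: [3, 1]})
    print(json.dumps(plan, indent=2))
    ```

    The resulting file is applied with bin/kafka-reassign-partitions.sh --execute and checked afterwards with --verify.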

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release that includes the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • API

    • It is no longer possible to revive a query by polling it after it has been stopped.

      For more information, see Running Query Jobs.
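      In practice this means a client's poll loop must treat a stopped job as terminal. Below is a minimal sketch of such a loop, with the HTTP call injected as a callable so the control flow can be seen without a server; the done, cancelled and events field names are assumptions about the poll response shape and should be checked against the Query Jobs documentation:

      ```python
      def poll_until_done(fetch_poll, max_polls=100):
          """Poll a query job until it reports done.

          fetch_poll() returns the decoded poll response. Once a job has
          been stopped, further polling will not revive it, so a stopped
          (cancelled) job is treated as a terminal error.
          """
          for _ in range(max_polls):
              body = fetch_poll()
              if body.get("cancelled"):
                  raise RuntimeError("query job was stopped; polling cannot revive it")
              if body.get("done"):
                  return body["events"]
          raise TimeoutError("query job did not complete in time")

      # Simulated poll responses: two in-progress polls, then completion.
      responses = iter([
          {"done": False, "events": []},
          {"done": False, "events": []},
          {"done": True, "events": [{"@rawstring": "hello"}]},
      ])
      events = poll_until_done(lambda: next(responses))
      ```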

  • Other

    • LogScale deletes humiotmp directories when it shuts down gracefully, but these directories can leak if LogScale crashes. LogScale now also deletes them on startup.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.

    • Bundled JDK upgraded to 22.0.1.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden. A host will crash if its configured role does not match what is listed in global for that host. Users who wish to change the role of a host in a cluster should instead remove the host from the cluster by unregistering it, wipe the host's data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier when doing this.

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • Layout changes have been made in the Connections UI page.

      For more information, see Connections.

    • A maximum length of 200 characters has been set for saved query names.

    • The warnings for numbers out of the browser's safe number range have been slightly modified.

      For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.

    • A new Field list column type has been added in the Event List. It formats all fields in the event in key-value pairs by grouping a field list by prefix.

      For more information, see Column Properties.

  • Automation and Alerts

    • Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.

      For more information, see Scheduled PDF Reports.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.

    • Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.
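    The unsetDynamicConfig mutation above would be invoked through the regular /graphql endpoint. Here is a sketch of building the request payload; the input shape ({config: ...}) and the DynamicConfig type name are assumptions and should be verified against the GraphQL schema:

    ```python
    import json

    # Mutation string for the new unsetDynamicConfig mutation. The exact
    # argument shape is assumed; check the GraphQL schema for your version.
    MUTATION = """
    mutation UnsetConfig($config: DynamicConfig!) {
      unsetDynamicConfig(input: {config: $config})
    }
    """

    def unset_dynamic_config_payload(config_name):
        """Build the JSON body for a POST to the /graphql endpoint."""
        return json.dumps({"query": MUTATION, "variables": {"config": config_name}})

    payload = unset_dynamic_config_payload("QueryBacktrackingLimit")
    ```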

  • API

    • Upgraded to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.

    • Information about files used in a query is now added to the query result returned by the API.

  • Configuration

    • The EXACT_MATCH_LIMIT configuration has been removed. It is no longer needed, since files are limited by size instead of rows.

    • When UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.

    • A new experimental QueryBacktrackingLimit dynamic configuration is available through GraphQL. It limits how many times a query may iterate over an individual event, which can happen with excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags. The default limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag leaves this limit off by default.
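      To illustrate what such a limit guards against, here is a toy model (not LogScale internals): functions that emit copies of an event cause the engine to revisit the same input event, and the limit aborts the query once any single event has been visited too many times.

      ```python
      def run_with_backtracking_limit(events, expand, limit=3000):
          """Toy model of a per-event backtracking limit (the default
          mirrors the release note's 3,000). expand() may emit copies of
          an event for reprocessing; each reprocessing counts as a visit
          to the original event."""
          visits = {}
          out = []
          work = [(i, e) for i, e in enumerate(events)]
          while work:
              i, e = work.pop()
              visits[i] = visits.get(i, 0) + 1
              if visits[i] > limit:
                  raise RuntimeError(f"event {i} iterated more than {limit} times")
              copies = expand(e)
              if copies:
                  work.extend((i, c) for c in copies)
              else:
                  out.append(e)
          return out

      def split_commas(event):
          """Emit one copy per comma-separated part, like a split() on a field."""
          if "," in event["v"]:
              return [{"v": part} for part in event["v"].split(",")]
          return None

      result = run_with_backtracking_limit([{"v": "a,b"}], split_commas)
      ```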

  • Dashboards and Widgets

    • A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. Parameter width is now also adjustable in the settings.

      For more information, see Parameter Panel Widget.

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by select() or drop(#repo) in the rule.

    • Audit logs related to Event Forwarders no longer include the properties of the event forwarder.

      Event forwarder disablement is now audit logged with type disable instead of enable.

    • Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.

  • Log Collector

    • Fleet Management now supports ephemeral hosts. If a collector is enrolled with the --ephemeralTimeout parameter, it will disappear from the Fleet Overview interface and be unenrolled after being offline for the specified number of hours. The feature requires LogScale Collector version 1.7.0 or above.

    • Live and Historic options for the Fleet Overview are introduced. In the Live view, the overview shows online collectors and is continuously updated with new CPU metrics, status changes, and similar information. The Historic view displays all collector records from the last 30 days and is not updated with new information.

      For more information, see Switching between Live and Historic overview.

  • Functions

    • The onlyTrue parameter has been added to the bitfield:extractFlags() query function. It allows outputting only the flags whose value is true.

      For more information, see bitfield:extractFlags().

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes lead to no results.

    • The query editor now gives warnings about certain regex constructs that are valid but suboptimal; specifically, quantified wildcards at the beginning or end of an (unanchored) regex.

    • Multi-valued arguments can now be passed to a saved query.

      For more information, see User Functions (Saved Searches).

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

    • Two new metrics have been introduced:

      • internal-throttled-poll-rate keeps track of the number of times that worker polling during query execution was throttled due to rate limiting.

      • internal-throttled-poll-wait-time keeps track of the maximum delay per poll round due to rate limiting.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • Dashboard parameter queries now run as live queries only when the dashboard itself is live.

    • Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.

  • Functions

    • The query editor has been fixed: field auto-completions would sometimes not be suggested.

    • The query editor would mark the entire query as erroneous when count() was given the distinct=true parameter but was missing an argument for the field parameter. This issue has been fixed.

    • The time:xxx() functions did not correctly use the query's time zone as the default: the offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ would be recognized as a shared file instead of a regular file. A shared file should be referred to using exactly /shared/ as a prefix.

    • Fixed a very rare edge case that could cause the creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • UI Changes

    • When a saved query is used, the query editor displays the underlying query string when you hover over it.

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.

  • Packages

    • Package installations now validate that there are no duplicate names within each package template type (for example, you cannot use the same name for multiple parsers that are part of the same package).

Falcon LogScale 1.142.0 GA (2024-06-11)

Version: 1.142.0
Type: GA
Release Date: 2024-06-11
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The any argument in sort() has been removed. Queries where any is explicitly set will be rejected. Please change the argument to either number, hex or string, depending on which option is the best fit for the data your query operates on.

    • The following changes have been made to sort():

      • It will no longer try to guess the type of the field values and instead default to number.

      • The number and hex options have been redefined to be total orders: values of the given type are sorted according to their natural order and those that could not be understood as the given type are sorted lexicographically. For instance, sorting the values 10, 100, 20, bcd, cde, abc in an ascending order with number will be rendered as: 10, 20, 100, abc, bcd, cde
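      The redefinition above behaves like a two-part sort key: values parseable as the given type come first in their natural order, and everything else follows in lexicographic order. A small model of the documented number behavior (a sketch of the ordering rule, not the engine's implementation):

      ```python
      def number_sort_key(value):
          """Model of sort(type=number): numeric values come first, ordered
          numerically; non-numeric values follow, ordered lexicographically."""
          try:
              return (0, float(value), "")
          except ValueError:
              return (1, 0.0, value)

      values = ["10", "100", "20", "bcd", "cde", "abc"]
      ordered = sorted(values, key=number_sort_key)
      # matches the ascending order given above: 10, 20, 100, abc, bcd, cde
      ```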

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release that includes the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • When a digest leader exceeds the PRIMARY_STORAGE_MAX_FILL_PERCENTAGE, it will now pause while holding on to leadership, instead of pausing by releasing leadership of all partitions.

New features and improvements

  • Security

    • The new ManageViewConnections Organization Administration permission has been added. It grants access to:

      • List all views and repositories

      • Create views linked to any repository

      • Update Connections of any existing view.

  • Installation and Deployment

    • NUMA support for the Docker images is now enabled:

      • The launcher script has been updated to set -XX:+UseNUMA in the default HUMIO_JVM_PERFORMANCE_OPTS.

      • The Docker images have been updated to include libnuma.so.1, which allows the JDK to optimize for NUMA hardware.

  • Dashboards and Widgets

    • Widget-level time selection can now be adjusted when a dashboard is used in view mode. This change adds flexibility in working with time on the dashboard and allows for easy comparative analysis on the fly.

      For more information, see Widget Time Selector.

Fixed in this release

  • Storage

    • A fix has been made to reduce contention on loading decompressMeta in segment files, resulting in a performance improvement.

    • Pending merges of segments would contend with the verification of segments being transferred between nodes or to bucket storage. This resulted in spuriously long transfer times due to queueing of the verification step for the segment file. This issue has now been fixed.

Improvement

  • Storage

    • The amount of work required from the local segment verifier at node boot has been reduced.

Falcon LogScale 1.141.0 GA (2024-06-04)

Version: 1.141.0
Type: GA
Release Date: 2024-06-04
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warnings prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, please either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release that includes the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser , DeleteParserInput , LanguageVersionInputType , createParserV2() , testParserV2() , updateParserV2() .
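    For migrating stored parser definitions, the emulation described above (wrapping each old test string in a ParserTestEventInput inside a ParserTestCaseInput) can be sketched as follows; the rawString and outputAssertions field names are assumptions and should be checked against the GraphQL schema:

    ```python
    def migrate_test_data(test_data):
        """Wrap each legacy testData string in the nested test-case
        structure expected by createParserV2()/updateParserV2().
        Field names are assumed; verify them against the schema."""
        return [
            {"event": {"rawString": raw}, "outputAssertions": []}
            for raw in test_data
        ]

    test_cases = migrate_test_data([
        "2024-06-04T12:00:00Z ERROR something failed",
        "2024-06-04T12:00:01Z INFO all good",
    ])
    ```

    The resulting list would be supplied as the testCases input field in place of the old testData strings.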

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Bundled JDK upgraded to 22.0.1.

New features and improvements

  • API

    • Upgraded to the latest Jakarta Mail API to prevent a warning message from being logged about a missing mail configuration file.

  • Configuration

    • When UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed and a UI message will now be displayed.

Fixed in this release

  • Storage

    • The Did not query segment error spuriously appearing when the cluster performs digest reassignment has now been fixed.

  • Dashboards and Widgets

    • Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.

  • Functions

    • The time:xxx() functions did not correctly use the query's time zone as the default: the offset was applied in the opposite direction, such that for example GMT+2 was applied as GMT-2. This has now been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ would be recognized as a shared file instead of a regular file. A shared file should be referred to using exactly /shared/ as a prefix.

Improvement

  • Packages

    • Package installations now validate that there are no duplicate names within each package template type (for example, you cannot use the same name for multiple parsers that are part of the same package).

Falcon LogScale 1.140.0 GA (2024-05-28)

Version: 1.140.0
Type: GA
Release Date: 2024-05-28
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warnings prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, please either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK, we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
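The migration steps above can be sketched as a GraphQL request body in Python. The input field names follow the notes; the mutation text, the ParserTestEventInput field name rawString, and the helper itself are illustrative assumptions, so verify them against the GraphQL schema:

```python
def migrate_test_data(test_strings):
    """Wrap legacy testData strings in the new testCases structure.

    Each old test string goes into a ParserTestEventInput inside a
    ParserTestCaseInput; with no assertions attached, behavior matches
    the old API. The "rawString" field name is an assumption.
    """
    return [{"event": {"rawString": s}} for s in test_strings]

def build_create_parser_v2(repo, name, script, fields_to_tag, test_strings):
    # Mutation text is an illustrative sketch, not the authoritative schema.
    mutation = """
    mutation CreateParser($input: CreateParserInputV2!) {
      createParserV2(input: $input) { id name }
    }
    """
    return {
        "query": mutation,
        "variables": {
            "input": {
                "repositoryName": repo,
                "name": name,
                "script": script,                              # was: sourceCode
                "fieldsToTag": fields_to_tag,                  # was: tagFields
                "testCases": migrate_test_data(test_strings),  # was: testData
                "allowOverwritingExistingParser": False,       # was: force
            }
        },
    }

payload = build_create_parser_v2(
    "my-repo", "my-parser", "parseJson()", ["host"], ['{"host":"a"}']
)
```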

New features and improvements

  • UI Changes

    • A new Field list column type has been added to the Event List. It formats all fields of the event as key-value pairs, grouping the field list by prefix.

      For more information, see Column Properties.

  • GraphQL API

    • Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.

  • API

    • Information about the files used in a query is now included in the query result returned by the API.

  • Configuration

    • The EXACT_MATCH_LIMIT configuration has been removed. It is no longer needed, since files are now limited by size rather than by row count.

  • Functions

Falcon LogScale 1.139.0 GA (2024-05-21)

Version: 1.139.0
Type: GA
Release Date: 2024-05-21
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • API

    • It is no longer possible to revive a query by polling it after it has been stopped.

      For more information, see Running Query Jobs.
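A client therefore has to treat a stopped query job as terminal. Below is a minimal sketch, assuming the documented query jobs endpoint path and a `done` field in the poll response; the `client` object is a stand-in for any HTTP transport:

```python
class QueryStoppedError(Exception):
    """The query job no longer exists and cannot be revived by polling."""

def poll_until_done(client, repo, job_id, max_polls=100):
    """Poll a query job until its result reports done.

    `client.get(path)` is assumed to return (status_code, body_dict).
    A gone-style status is treated as terminal rather than retried,
    since polling no longer revives a stopped query.
    """
    path = f"/api/v1/repositories/{repo}/queryjobs/{job_id}"
    for _ in range(max_polls):
        status, body = client.get(path)
        if status in (404, 410):
            raise QueryStoppedError(f"query job {job_id} is gone; start a new query")
        if body.get("done"):
            return body
    raise TimeoutError("query job did not complete within the polling budget")

# Usage with a stubbed client, so the sketch is self-contained:
class FakeClient:
    def __init__(self, responses):
        self._responses = iter(responses)
    def get(self, path):
        return next(self._responses)

result = poll_until_done(
    FakeClient([(200, {"done": False}), (200, {"done": True, "events": []})]),
    "my-repo", "job-1",
)
```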

  • Other

    • LogScale deletes humiotmp directories when it is gracefully shut down, but these directories could leak if LogScale crashed. LogScale now also deletes them on startup.

New features and improvements

  • UI Changes

  • Configuration

    • A new QueryBacktrackingLimit dynamic configuration is available through GraphQL as an experimental feature. It limits how many times a query may iterate over individual events (which can happen with excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags). The default limit is 3,000 and can be modified via the dynamic configuration. At present, the feature flag leaves this limit disabled by default.

  • Ingestion

    • Audit logs related to Event Forwarders no longer include the properties of the event forwarder.

      Event forwarder disablement is now audit logged with type disable instead of enable.

    • Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.

  • Functions

    • The onlyTrue parameter has been added to the bitfield:extractFlags() query function. It allows outputting only the flags whose value is true.

      For more information, see bitfield:extractFlags().
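The effect of onlyTrue can be illustrated with a small Python analogue; this mirrors the idea only, not the LogScale function's exact field names or output format:

```python
def extract_flags(value: int, names, only_true: bool = False) -> dict:
    """Map bit positions to named boolean flags, lowest bit first.

    With only_true=True, flags whose bit is unset are omitted,
    mirroring the new onlyTrue parameter.
    """
    flags = {name: bool((value >> i) & 1) for i, name in enumerate(names)}
    if only_true:
        flags = {k: v for k, v in flags.items() if v}
    return flags

print(extract_flags(0b101, ["READ", "WRITE", "EXEC"]))
print(extract_flags(0b101, ["READ", "WRITE", "EXEC"], only_true=True))
```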

    • The query editor now warns about certain regex constructs that are valid but suboptimal: specifically, quantified wildcards at the beginning or end of an (unanchored) regex.

  • Other

    • Two new metrics have been introduced:

      • internal-throttled-poll-rate tracks the number of times polling workers were throttled during query execution due to rate limiting.

      • internal-throttled-poll-wait-time tracks the maximum delay per poll round due to rate limiting.

Improvement

  • UI Changes

    • When a saved query is used, hovering over it in the query editor displays the underlying query string.

Falcon LogScale 1.138.0 GA (2024-05-14)

Version: 1.138.0
Type: GA
Release Date: 2024-05-14
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image has also been upgraded to 3.7.0.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden. A host will now crash if its configured role doesn't match what is listed in global for that host. To change the role of a host in a cluster, instead remove the host from the cluster by unregistering it, wipe its data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier in the process.

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Layout changes have been made in the Connections UI page.

      For more information, see Connections.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
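A request to the new mutation can be sketched as follows; the input object shape and the DynamicConfig type name are illustrative assumptions, so check the GraphQL schema before use:

```python
def build_unset_dynamic_config(config_name: str) -> dict:
    """Build a GraphQL request body for unsetDynamicConfig.

    The mutation text below is a sketch: the input shape and enum
    type name are assumptions for illustration.
    """
    mutation = """
    mutation Unset($config: DynamicConfig!) {
      unsetDynamicConfig(input: { config: $config })
    }
    """
    return {"query": mutation, "variables": {"config": config_name}}

# QueryBacktrackingLimit is one of the dynamic configurations mentioned
# in these release notes.
payload = build_unset_dynamic_config("QueryBacktrackingLimit")
```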

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered out by select() or drop(#repo) in the rule.

  • Functions

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes yield no results.

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of mini-segments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.

  • Other

    • Fixed a very rare edge case that could cause the creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.

Falcon LogScale 1.137.0 GA (2024-05-07)

Version: 1.137.0
Type: GA
Release Date: 2024-05-07
Availability: Cloud
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change due to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
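
As a sketch of the input migration described above, the following Python helper maps deprecated createParser arguments onto the createParserV2 shape. The wrapping of an old test string into a test case follows the ParserTestEventInput/ParserTestCaseInput description, but the inner field names ("event", "rawString") are assumptions, and the helper is illustrative rather than part of any LogScale SDK:

```python
# Hypothetical migration helper for the createParser -> createParserV2
# renames listed above. Inner field names ("event", "rawString") are
# assumptions; consult the GraphQL schema for the authoritative shapes.

def migrate_create_parser_input(old: dict) -> dict:
    new = dict(old)  # keep unrelated fields as-is
    if "testData" in new:
        # wrap each old test string in a structured test case
        new["testCases"] = [
            {"event": {"rawString": raw}} for raw in new.pop("testData")
        ]
    renames = {
        "force": "allowOverwritingExistingParser",
        "sourceCode": "script",
        "tagFields": "fieldsToTag",
    }
    for old_name, new_name in renames.items():
        if old_name in new:
            new[new_name] = new.pop(old_name)
    return new

old_input = {
    "name": "my-parser",
    "sourceCode": "parseJson()",
    "testData": ['{"a": 1}'],
    "force": True,
    "tagFields": ["#type"],
}
migrated = migrate_create_parser_input(old_input)
print(migrated["script"], migrated["fieldsToTag"])
```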

New features and improvements

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

  • Automation and Alerts

    • Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.

      For more information, see Scheduled PDF Reports.

  • Dashboards and Widgets

    • A parameter panel widget type has been added to allow users to drag parameters from the top panel into these panels. Also, the parameter width is now adjustable in the settings.

      For more information, see Parameter Panel Widget.

  • Log Collector

    • Fleet Management now supports ephemeral hosts. If a collector is enrolled with the --ephemeralTimeout parameter, it will disappear from the Fleet Overview interface and be unenrolled after being offline for the specified duration in hours. This feature requires LogScale Collector version 1.7.0 or above.

    • Live and Historic options for Fleet Overview are introduced. When Live, the overview will show online collectors and continuously be updated with e.g. new CPU metrics or status changes. The Historic view will display all records of collectors for the last 30 days. In this case the overview will not be updated with new information.

      For more information, see Switching between Live and Historic overview.
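
The ephemeral-host rule described above (unenroll a collector once it has been offline longer than the configured timeout in hours) can be sketched as follows. This is illustrative only, not the Fleet Management implementation:

```python
from datetime import datetime, timedelta

# Illustrative sketch (not the Fleet Management implementation) of the
# ephemeral-host rule: a collector is unenrolled once it has been offline
# for longer than the configured timeout in hours.

def should_unenroll(last_seen: datetime, timeout_hours: int,
                    now: datetime) -> bool:
    return now - last_seen > timedelta(hours=timeout_hours)

now = datetime(2024, 12, 17, 12, 0)
print(should_unenroll(datetime(2024, 12, 16, 12, 0), 12, now))  # True: offline 24h
print(should_unenroll(datetime(2024, 12, 17, 6, 0), 12, now))   # False: offline 6h
```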

Fixed in this release

  • Functions

    • An issue in the query editor has been fixed where field auto-completions would sometimes not be suggested.

    • The query editor would mark the entire query as erroneous when count() was given the distinct=true parameter but was missing an argument for the field parameter. This issue has been fixed.

Falcon LogScale 1.136.2 LTS (2024-06-12)

Version: 1.136.2
Type: LTS
Release Date: 2024-06-12
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.136.2/server-1.136.2.tar.gz

These notes include entries from the following previous releases: 1.136.1

Bug fixes and updates.

Important

Due to a known memory issue in this release, customers are advised to upgrade to 1.137.0 or later.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change due to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Removed

Items that have been removed as of this release.

Storage

  • The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Queries

    • Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.

      For more information, see Query Count.

Upgrades

Changes that may occur or be required during an upgrade.

  • Storage

    • Docker images have been upgraded to Java 22.

    • Added new deployment artifacts. The published tarballs (e.g. server.tar.gz) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.

New features and improvements

  • Installation and Deployment

    • The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • The query editor now shows completions for known field values that have previously been observed in results. For instance, #repo = m may show completions for repositories starting with m seen in previous results.

    • Sign up to LogScale Community Edition is no longer available for new users. Links, pages and UI flows to access it have been removed.

    • The number of events in the current window has been added to Metric Types as window_count.

  • Storage

    • The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the S3_STORAGE_CONCURRENCY capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.

    • We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.
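
The adjusted bucket transfer prioritization described above can be sketched as a slot split: when behind on both uploads and downloads, 75% of the configured S3_STORAGE_CONCURRENCY slots go to uploads and 25% to downloads. In this sketch the single-sided branches are assumptions for completeness, not documented behavior:

```python
# Sketch of the adjusted bucket transfer prioritization: 75% of the
# configured S3_STORAGE_CONCURRENCY slots for uploads, 25% for downloads,
# when behind on both. The single-sided branches are assumptions.

def transfer_slots(total: int, uploads_behind: bool,
                   downloads_behind: bool) -> tuple:
    """Return (upload_slots, download_slots)."""
    if uploads_behind and downloads_behind:
        uploads = (total * 3) // 4
        return uploads, total - uploads
    if uploads_behind:
        return total, 0
    if downloads_behind:
        return 0, total
    return 0, 0

print(transfer_slots(8, True, True))   # (6, 2)
print(transfer_slots(8, False, True))  # (0, 8)
```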

  • Configuration

    • The following configuration parameters have been introduced:

    • The amount of global meta data required for retention spans of over 30 days has been reduced. The amount of global meta data required in clusters with a high number of active datasources has also been reduced, as well as the global size of mini segments, by combining them into larger mini segments.

      Pre-merging mini segments now reduces the number of segment files on disk (and in the bucket) and reduces the amount of meta data for segment targets in progress. This allows for larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini segments without incurring a larger number of mini segments.

      This feature is only supported from v1.112.0. To safely enable it by default, the minimum version to upgrade from has been raised to v1.112.0, which disallows rollback to versions older than this.

      The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini segments into larger mini segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.

      For more information, see Global Database, Ingestion: Digest Phase.

    • The default values for the following configuration parameters have changed:

      • FLUSH_BLOCK_SECONDS = 900 (was 1,800)

      • MAX_HOURS_SEGMENT_OPEN = 720 (was 24, maximum is now 24,000)
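
The pre-merging of mini segments described above can be sketched as a toy example: small files are combined until a target size is reached, reducing the number of objects and the per-segment meta data. The greedy strategy and the sizes here are illustrative only, not LogScale's merge policy:

```python
# Toy sketch of pre-merging mini segments: combine small files until a
# target size is reached, reducing the object count. Illustrative only.

def pre_merge(sizes, target=100):
    merged, current = [], 0
    for size in sizes:
        current += size
        if current >= target:
            merged.append(current)
            current = 0
    if current:
        merged.append(current)  # remaining partial batch
    return merged

minis = [30, 40, 50, 20, 60, 10]
print(pre_merge(minis))  # [120, 90]: two files instead of six
```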

  • Dashboards and Widgets

    • The automatic rendering of URLs as links has been disabled for the Table widget. Only URLs appearing in queries with the markdown style e.g. [CrowdStrike](https://crowdstrike.com) will be automatically rendered as links in the Table widget columns. Content, including plain URLs e.g. https://crowdstrike.com, can still be rendered as links, but this should now be explicitly configured using the Show asLink widget property.

      For more information, see Table Widget Properties.

    • Dashboard parameters have received the following updates:

      • The name of the parameter is on top of the input field, so more space is available for both parts.

      • A Clear all button has been added to multi-value parameters so that all values can be removed in one click.

      • The parameter configuration form has been moved to the side panel.

      • Multiple values can be added at once to a multi-value parameter by entering a comma-separated list of values, which are then used as individual values.

      For more information, see Multi-value Parameters.
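
The rendering rule described above for the Table widget can be sketched with a simple pattern: only markdown-style [label](url) content is rewritten to a link, while bare URLs pass through unchanged. The regex and HTML output here are illustrative, not the widget's actual renderer:

```python
import re

# Sketch of the Table widget rule: rewrite only markdown-style
# [label](url) content to a link; leave bare URLs untouched.
MD_LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def render_cell(text: str) -> str:
    return MD_LINK.sub(r'<a href="\2">\1</a>', text)

print(render_cell("[CrowdStrike](https://crowdstrike.com)"))
print(render_cell("https://crowdstrike.com"))  # unchanged
```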

  • Ingestion

    • Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.

      For more information, see Ingest Data from AWS S3.

    • Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.

      For more information, see Writing a Parser.

  • Log Collector

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The optional limit parameter has been added to the readFile() function to limit the number of rows of the file returned.

    • The geography:distance() function is now generally available. The default value for the as parameter has been changed to _distance.

      For more information, see geography:distance().

    • The onDuplicate parameter has been added to kvParse() to specify how to handle duplicate fields.

    • For Cloud customers: the maximum value of the limit parameter for tail() and head() functions has been increased to 20,000.

    • For Self-Hosted solutions: the maximum value of the limit parameter for tail() and head() functions has been aligned with the StateRowLimit dynamic configuration. This means that the upper value of limit is now adjustable for these two functions.

    • The readFile() function will show a warning when the results are truncated due to reaching the global result row limit. This behavior was previously silent.
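
The duplicate-field handling that the new onDuplicate parameter of kvParse() selects between can be sketched as below. The strategy names used here ("keepFirst", "keepLast") are illustrative, not the documented argument values:

```python
# Sketch of duplicate-field handling strategies of the kind the new
# onDuplicate parameter selects between. Strategy names are illustrative.

def kv_collect(pairs, on_duplicate="keepFirst"):
    fields = {}
    for key, value in pairs:
        if key not in fields or on_duplicate == "keepLast":
            fields[key] = value
    return fields

pairs = [("user", "alice"), ("ip", "10.0.0.1"), ("user", "bob")]
print(kv_collect(pairs))                           # keeps the first "user"
print(kv_collect(pairs, on_duplicate="keepLast"))  # keeps the last "user"
```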

  • Other

    • New metrics ingest-queue-write-offset and ingest-queue-read-offset have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

    • The ConfigLoggerJob now also logs digestReplicationFactor, segmentReplicationFactor, minHostAlivePercentageToEnableClusterRebalancing, allowUpdateDesiredDigesters and allowRebalanceExistingSegments.

    • New metric events-parsed has been added, serving as an indicator for how many input events a parser has been applied to.

Fixed in this release

  • Security

    • Various OIDC caching issues have been fixed, including ensuring a refresh of the JWKS cache once per hour by default.

  • UI Changes

    • The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.

    • The error Failed to fetch data for aliased fields would sometimes appear on the Search page of the sandbox repository. This issue has been fixed.

    • Data statistics on the Organizations overview page could not be populated in some cases. This issue has been fixed.

    • Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.

    • Remaining Humio occurrences have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.

  • Storage

    • Several issues in redactEvents segment rewriting have been fixed that could cause either failure to complete the rewrite, or events to be missed in rare cases. Be aware that redaction jobs submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events. You are therefore encouraged to resubmit recently submitted redactions, to ensure the events are actually gone.

    • Pending merges of segments would contend with the verification of segments being transferred between nodes or buckets. This resulted in spuriously long transfer times due to queueing of the verification step for the segment file. This issue has now been fixed.

  • Dashboards and Widgets

    • A visualization issue has been fixed where the dropdown menu for saving a dashboard widget showed a wrong title in dashboards not belonging to a package.

    • Parameters appearing between a string containing \\ and any other string would not be correctly detected. This issue has been fixed.

    • Options other than exporting to a CSV file were not available on the Dashboard page for a widget and on the Search page for a query result. This issue is now fixed.

  • Queries

    • Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.

  • Functions

    • The error message shown when providing a non-existing query function in an anonymous query, e.g. bucket(function=[{_noFunction()}]), has been fixed.

    • The table() function would wrongly accept a limit of 0, causing serialization to break between cluster nodes. This issue has been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ would be recognized as a shared file instead of a regular file. However, a shared file should be referred to using exactly /shared/ as a prefix.

    • DNS lookup was blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.

  • Packages

    • Uploading a package zip would fail on Windows devices. This issue has been fixed.

Known Issues

  • Other

    • An issue has been identified where a memory leak could cause a node to exhaust the available memory. Customers are advised to upgrade to 1.137.0 or higher.

Improvement

  • Installation and Deployment

    • An error log is displayed if the latency on global-events exceeds 150 seconds, to prevent nodes from crashing.

  • Storage

    • Removed some work from the thread that schedules bucket transfers, which could be slightly expensive in cases where the cluster had fallen behind on uploads.

  • Configuration

    • Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.

Falcon LogScale 1.136.1 LTS (2024-05-29)

Version: 1.136.1
Type: LTS
Release Date: 2024-05-29
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.136.1/server-1.136.1.tar.gz

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change due to incidents caused by the large implicit limit used previously.

      For more information, see rdns().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Removed

Items that have been removed as of this release.

Storage

  • The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean representing success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
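    As an illustration of the new shape, a createParserV2() call might look like the sketch below. The field names follow the descriptions above, but the exact input types (in particular the rawString and outputAssertions fields inside the test case) are assumptions; consult the GraphQL schema of your LogScale version before relying on them.

    ```graphql
    mutation {
      createParserV2(input: {
        name: "my-parser"                       # hypothetical parser name
        repositoryName: "my-repo"               # hypothetical repository
        script: "parseJson()"                   # replaces the old sourceCode field
        fieldsToTag: ["host"]                   # replaces the old tagFields field
        fieldsToBeRemovedBeforeParsing: []
        allowOverwritingExistingParser: false   # replaces the old force field
        testCases: [
          {
            # Old-style test string wrapped as described above:
            event: { rawString: "{\"host\": \"web-1\"}" }
            outputAssertions: []
          }
        ]
      }) {
        # The mutation now returns a Parser directly:
        id
        name
      }
    }
    ```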

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Queries

    • Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.

      For more information, see Query Count.

Upgrades

Changes that may occur or be required during an upgrade.

  • Storage

    • Docker images have been upgraded to Java 22.

    • Added new deployment artifacts. The published tarballs (e.g. server.tar.gz) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.

New features and improvements

  • Installation and Deployment

    • The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.
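      To verify how THP is configured on a host, you can inspect the standard sysfs path (available on most Linux distributions; the bracketed value is the active mode):

      ```shell
      cat /sys/kernel/mm/transparent_hugepage/enabled
      # Typical output: always [madvise] never
      ```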

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • The query editor now shows completions for known field values that have previously been observed in results. For instance, #repo = m may show completions for repositories starting with m seen in previous results.

    • Sign-up for LogScale Community Edition is no longer available to new users. Links, pages, and UI flows for accessing it have been removed.

    • The number of events in the current window has been added to Metric Types as window_count.

  • Automation and Alerts

  • GraphQL API

  • Storage

    • The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the S3_STORAGE_CONCURRENCY capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.

    • We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.

  • Configuration

    • The following configuration parameters have been introduced:

    • The amount of global metadata required for retention spans of over 30 days has been reduced. The amount of global metadata required in clusters with a high number of active datasources has also been reduced, as has the global size of mini-segments, by combining them into larger mini-segments.

      Pre-merging mini-segments now reduces the number of segment files on disk (and in buckets) and reduces the amount of metadata for segment targets in progress. This allows for larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini-segments without incurring a larger number of mini-segments.

      This feature is only supported from v1.112.0. To enable it safely by default, the minimum version to upgrade from is now raised to v1.112.0, which disallows rollback to versions older than this.

      The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini segments into larger mini segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.

      For more information, see Global Database, Ingestion: Digest Phase.

    • The default values for the following configuration parameters have changed:

      • FLUSH_BLOCK_SECONDS = 900 (was 1,800)

      • MAX_HOURS_SEGMENT_OPEN = 720 (was 24, maximum is now 24,000)
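      If you need to set these parameters explicitly, they are plain environment variables read at startup. A sketch (values shown are the new defaults; the file location depends on your deployment):

      ```shell
      # LogScale configuration environment, e.g. the env file read by the launcher:
      FLUSH_BLOCK_SECONDS=900      # flush open blocks after 15 minutes (was 1800)
      MAX_HOURS_SEGMENT_OPEN=720   # keep segments open for up to 30 days (was 24)
      ```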

  • Dashboards and Widgets

    • The automatic rendering of URLs as links has been disabled for the Table widget. Only URLs written in queries using markdown link style, e.g. [CrowdStrike](https://crowdstrike.com), will be automatically rendered as links in Table widget columns. Content, including plain URLs e.g. https://crowdstrike.com, can still be rendered as links, but this must now be explicitly configured using the Show asLink widget property.

      For more information, see Table Widget Properties.

    • Dashboard parameters have received the following updates:

      • The name of the parameter is now shown above the input field, so more space is available for both parts.

      • A Clear all button has been added to multi-value parameters so that all values can be removed in one click.

      • The parameter configuration form has been moved to the side panel.

      • Multiple values can be added at once to a multi-value parameter by inputting a comma-separated list of values, which are treated as individual values.

      For more information, see Multi-value Parameters.

  • Ingestion

    • Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.

      For more information, see Ingest Data from AWS S3.

    • Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.

      For more information, see Writing a Parser.

  • Log Collector

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The optional limit parameter has been added to the readFile() function to limit the number of rows of the file returned.
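      For example, to preview only the first rows of a lookup file (the file name is hypothetical):

      ```
      readFile("hosts.csv", limit=5)
      ```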

    • The geography:distance() function is now generally available. The default value for the as parameter has been changed to _distance.

      For more information, see geography:distance().

    • The onDuplicate parameter has been added to kvParse() to specify how to handle duplicate fields.

    • For Cloud customers: the maximum value of the limit parameter for tail() and head() functions has been increased to 20,000.

    • For Self-Hosted solutions: the maximum value of the limit parameter for tail() and head() functions has been aligned with the StateRowLimit dynamic configuration. This means that the upper value of limit is now adjustable for these two functions.

    • The readFile() function will show a warning when the results are truncated due to reaching the global result row limit. This truncation was previously silent.

  • Other

    • New metrics ingest-queue-write-offset and ingest-queue-read-offset have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

    • The ConfigLoggerJob now also logs digestReplicationFactor, segmentReplicationFactor, minHostAlivePercentageToEnableClusterRebalancing, allowUpdateDesiredDigesters and allowRebalanceExistingSegments.

    • New metric events-parsed has been added, serving as an indicator for how many input events a parser has been applied to.

Fixed in this release

  • Security

    • Various OIDC caching issues have been fixed, including ensuring that the JWKS cache is refreshed once per hour by default.

  • UI Changes

    • The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.

    • The error Failed to fetch data for aliased fields would sometimes appear on the Search page of the sandbox repository. This issue has been fixed.

    • Data statistics in the Organizations overview page could not be populated in some cases. This issue has been fixed.

    • Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.

    • Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.

  • Storage

    • redactEvents segment rewriting has been fixed for several issues that could cause either failure to complete the rewrite, or events to be missed in rare cases. Users should be aware that redaction jobs that were submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events. Therefore, you are encouraged to resubmit redactions you have recently submitted, to ensure the events are actually gone.

  • Dashboards and Widgets

    • The dropdown menu for saving a dashboard widget showed a wrong title in dashboards not belonging to a package. This visualization issue has been fixed.

    • Parameters appearing between a string containing \\ and any other string would not be correctly detected. This issue has been fixed.

    • Export options other than CSV were not available on the Dashboard page for a widget and on the Search page for a query result. This issue has been fixed.

  • Queries

    • Multiple clients could trigger concurrent computation of the result step for a shared query. This issue has been fixed: only one pending computation is now allowed at a time.

  • Functions

    • The error message shown when providing a non-existent query function in an anonymous query, e.g. bucket(function=[{_noFunction()}]), has been fixed.

    • The table() function would wrongly accept a limit of 0, causing serialization to break between cluster nodes. This issue has been fixed.

  • Other

    • DNS lookups were blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.

  • Packages

    • Uploading a package zip would fail on Windows devices. This issue has been fixed.

Improvement

  • Installation and Deployment

    • An error is now logged if the latency on global-events exceeds 150 seconds, to help prevent nodes from crashing.

  • Storage

    • Removed some potentially expensive work from the thread that schedules bucket transfers, which could slow it down when the cluster had fallen behind on uploads.

  • Configuration

    • Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.

Falcon LogScale 1.136.0 GA (2024-04-30)

Version: 1.136.0
Type: GA
Release Date: 2024-04-30
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Removed

Items that have been removed as of this release.

Storage

  • The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value of the type parameter for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be interpreted according to the given type argument.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean representing success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

New features and improvements

  • GraphQL API

  • Ingestion

    • Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.

      For more information, see Writing a Parser.

  • Log Collector

Fixed in this release

  • UI Changes

    • Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.

  • Functions

    • The table() function would wrongly accept a limit of 0, causing serialization to break between cluster nodes. This issue has been fixed.

  • Other

    • DNS lookups were blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.

Falcon LogScale 1.135.0 GA (2024-04-23)

Version: 1.135.0
Type: GA
Release Date: 2024-04-23
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users that need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value of the type parameter for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be interpreted according to the given type argument.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean representing success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser , DeleteParserInput , LanguageVersionInputType , createParserV2() , testParserV2() , updateParserV2() .
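
As an illustration of the migration path described above, the following Python sketch builds the variables for an updateParserV2()-style call, wrapping an old testData string in the new testCases structure (a ParserTestEventInput inside a ParserTestCaseInput). The exact input field names used here (rawString, outputAssertions) and the language version value are assumptions for illustration; consult the GraphQL schema for the authoritative shape.

```python
# Illustrative sketch only: maps old-style testData strings into the new
# testCases structure. Field names below are assumptions; check the schema.
import json

def legacy_test_to_test_case(test_string: str) -> dict:
    # Wrap the old test string in a ParserTestEventInput inside a
    # ParserTestCaseInput, as the notes recommend for emulating the old API.
    return {"event": {"rawString": test_string}, "outputAssertions": []}

def build_update_parser_v2_variables(repo, parser_id, script, version, old_tests):
    # repositoryName and id are mandatory in the new schema; languageVersion
    # now lives inside the script input object (UpdateParserScriptInput).
    return {
        "input": {
            "repositoryName": repo,
            "id": parser_id,
            "script": {"script": script, "languageVersion": version},
            "testCases": [legacy_test_to_test_case(t) for t in old_tests],
        }
    }

variables = build_update_parser_v2_variables(
    "my-repo", "parser-123", "parseJson()", "1.0", ["2024-01-01 login ok"])
payload = json.dumps(variables)  # body for the GraphQL request
```

The mutation then returns a Parser directly, rather than a Parser wrapped in an object, so no unwrapping step is needed on the response.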

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Upgrades

Changes that may occur or be required during an upgrade.

  • Storage

    • Docker images have been upgraded to Java 22.

    • Added new deployment artifacts. The published tarballs (e.g. server.tar.gz) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.

New features and improvements

  • UI Changes

    • The query editor now shows completions for known field values that have previously been observed in results. For instance, #repo = m may show completions for repositories starting with m seen in previous results.

  • Storage

    • We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.

  • Configuration

    • The following configuration parameters have been introduced:

    • The amount of global metadata required for retention spans of over 30 days has been reduced. The amount of global metadata required in clusters with a high number of active datasources has also been reduced, as has the global size of mini-segments, which are now combined into larger mini-segments.

      Pre-merging mini-segments now reduces the number of segment files on disk (and in the bucket) and reduces the amount of metadata for in-progress segment targets. This allows larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini-segments without incurring a larger number of mini-segments.

      This feature is only supported from v1.112.0. To enable it safely by default, the minimum version to upgrade from has been raised to v1.112.0, which disallows rollback to versions older than this.

      The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini-segments into larger mini-segment files, but does not alter the defaults below, nor change how already merged mini-segments behave.

      For more information, see Global Database, Ingestion: Digest Phase.

    • The default values for the following configuration parameters have changed:

      • FLUSH_BLOCK_SECONDS = 900 (was 1,800)

      • MAX_HOURS_SEGMENT_OPEN = 720 (was 24, maximum is now 24,000)

  • Dashboards and Widgets

    • The automatic rendering of URLs as links has been disabled for the Table widget. Only URLs written in markdown style, e.g. [CrowdStrike](https://crowdstrike.com), will be automatically rendered as links in Table widget columns. Content, including plain URLs e.g. https://crowdstrike.com, can still be rendered as links, but this must now be explicitly configured using the Show as Link widget property.

      For more information, see Table Widget Properties.
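
The distinction above can be sketched as a small rendering rule: markdown-style links become anchors, while plain URLs pass through as text unless explicitly configured. This is an illustrative sketch, not the widget's actual implementation.

```python
# Minimal sketch of the new default behavior: only markdown-style links are
# rendered as anchors; bare URLs are left untouched.
import re

MD_LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def render_cell(text: str) -> str:
    # Replace markdown links with HTML anchors; leave bare URLs as plain text.
    return MD_LINK.sub(r'<a href="\2">\1</a>', text)

print(render_cell("[CrowdStrike](https://crowdstrike.com)"))  # rendered as an anchor
print(render_cell("https://crowdstrike.com"))                 # passes through unchanged
```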

    • Dashboard parameters have received the following updates:

      • The parameter name now appears above the input field, so more space is available for both parts.

      • A Clear all button has been added to multi-value parameters so that all values can be removed in one click.

      • The parameter configuration form has been moved to the side panel.

      • Multiple values can be added at once to a multi-value parameter by entering a comma-separated list of values, which are then used as individual values.

      For more information, see Multi-value Parameters.

Fixed in this release

  • Dashboards and Widgets

    • Export options other than exporting to a CSV file were not available for a widget on the Dashboard page and for a query result on the Search page. This issue is now fixed.

  • Functions

    • The error message shown when providing a non-existent query function in an anonymous query, e.g. bucket(function=[{_noFunction()}]), has been fixed.

Falcon LogScale 1.134.0 GA (2024-04-16)

Version: 1.134.0
Type: GA
Release Date: 2024-04-16
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.
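
The difference between these orderings can be seen in a short sketch. The fallback key below is an illustrative approximation of the described behavior (numbers sort numerically, unparsable values fall back to string order), not LogScale's implementation.

```python
# Same values, three orderings: lexicographic, numeric-with-fallback, and hex.
values = ["10", "9", "0x2", "0xA"]

lexicographic = sorted(values)  # plain string comparison

def numeric_or_fallback(v):
    # Numbers sort numerically; anything unparsable falls back to
    # lexicographic order, mirroring the fallback described above.
    try:
        return (0, float(v), "")
    except ValueError:
        return (1, 0.0, v)

numeric = sorted(values, key=numeric_or_fallback)

# With type=hex, values are compared by their hexadecimal numeric value.
hex_order = sorted(["0x2", "0xA", "0x10"], key=lambda v: int(v, 16))
```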

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
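
As a sketch of the recommended replacement, the reassignment plan consumed by bin/kafka-reassign-partitions.sh with --reassignment-json-file is a small JSON document. The topic name and broker ids below are illustrative placeholders.

```python
# Generate a Kafka partition reassignment plan (the "version 1" JSON format
# expected by kafka-reassign-partitions.sh). Topic and broker ids are
# placeholders for illustration.
import json

def reassignment_plan(topic, partition_to_replicas):
    return {
        "version": 1,
        "partitions": [
            {"topic": topic, "partition": p, "replicas": replicas}
            for p, replicas in sorted(partition_to_replicas.items())
        ],
    }

plan = reassignment_plan("humio-ingest", {0: [1, 2], 1: [2, 3]})
print(json.dumps(plan, indent=2))
# Then apply it with, for example:
#   bin/kafka-reassign-partitions.sh --bootstrap-server <broker> \
#     --reassignment-json-file plan.json --execute
```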

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser , DeleteParserInput , LanguageVersionInputType , createParserV2() , testParserV2() , updateParserV2() .

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Queries

    • Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.

      For more information, see Query Count.

New features and improvements

  • UI Changes

    • Sign-up for LogScale Community Edition is no longer available to new users. Links, pages, and UI flows to access it have been removed.

    • The number of events in the current window has been added to Metric Types as window_count.

  • Functions

    • The geography:distance() function is now generally available. The default value for the as parameter has been changed to _distance.

      For more information, see geography:distance().

    • The readFile() function now shows a warning when results are truncated due to reaching the global result row limit. This behavior was previously silent.

  • Other

    • The ConfigLoggerJob now also logs digestReplicationFactor, segmentReplicationFactor, minHostAlivePercentageToEnableClusterRebalancing, allowUpdateDesiredDigesters and allowRebalanceExistingSegments.

Fixed in this release

  • UI Changes

    • The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.

    • Data statistics on the Organizations overview page could not be populated in some cases. This issue has been fixed.

  • Dashboards and Widgets

    • A visualization issue has been fixed: the dropdown menu for saving a dashboard widget showed a wrong title in dashboards not belonging to a package.

  • Queries

    • Multiple clients could trigger concurrent computation of the result step for a shared query. This issue has been fixed: only one pending computation is now allowed at a time.
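
The fix can be illustrated with a minimal sketch of the "one pending computation" pattern. This is an illustrative concurrency idiom, not LogScale's actual internals.

```python
# Only one caller computes the shared result; concurrent callers wait for it.
import threading

class SharedQueryResult:
    def __init__(self, compute):
        self._compute = compute
        self._lock = threading.Lock()
        self._pending = None  # (done-event, one-slot result holder)

    def get(self):
        with self._lock:
            if self._pending is None:
                # First caller becomes the owner and will run the computation.
                self._pending = (threading.Event(), [None])
                owner = True
            else:
                owner = False
            event, slot = self._pending
        if owner:
            slot[0] = self._compute()  # compute exactly once
            event.set()
        else:
            event.wait()  # reuse the single pending computation
        return slot[0]
```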

Falcon LogScale 1.133.0 GA (2024-04-09)

Version: 1.133.0
Type: GA
Release Date: 2024-04-09
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser , DeleteParserInput , LanguageVersionInputType , createParserV2() , testParserV2() , updateParserV2() .

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • Installation and Deployment

    • The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.

  • Storage

    • The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the S3_STORAGE_CONCURRENCY capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.
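
A worked sketch of the split described above, assuming simple integer rounding (the exact rounding LogScale uses is not specified here):

```python
# Illustrative slot allocation for bucket transfers. When behind on both
# directions, 75% of S3_STORAGE_CONCURRENCY goes to uploads, 25% to downloads.
def transfer_slots(concurrency, behind_on_uploads, behind_on_downloads):
    if behind_on_uploads and behind_on_downloads:
        uploads = max(1, (concurrency * 3) // 4)  # 75% reserved for uploads
        return uploads, concurrency - uploads
    if behind_on_downloads:
        return 0, concurrency  # previous behavior: all slots for downloads
    return concurrency, 0

# e.g. with S3_STORAGE_CONCURRENCY=8, behind on both: 6 upload, 2 download slots
```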

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The optional limit parameter has been added to the readFile() function to limit the number of rows of the file returned.
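
Conceptually, the new limit parameter caps how many rows of the file are returned, as in this Python analogue (not LogScale's implementation):

```python
# Return at most `limit` rows from a file; limit=None keeps the old behavior
# of returning all rows, analogous to readFile(..., limit=N).
from itertools import islice

def read_rows(path, limit=None):
    with open(path) as f:
        rows = f if limit is None else islice(f, limit)
        return [line.rstrip("\n") for line in rows]
```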

Fixed in this release

  • UI Changes

    • The error Failed to fetch data for aliased fields would sometimes appear on the Search page of the sandbox repository. This issue has been fixed.

    • Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.

  • Storage

    • Several issues with redactEvents segment rewriting have been fixed that could cause either failure to complete the rewrite or, in rare cases, missed events. Note that redaction jobs submitted prior to upgrading to a fixed version may still fail to complete correctly or may miss events. You are therefore encouraged to resubmit recently submitted redactions, to ensure the events are actually gone.

  • Dashboards and Widgets

    • Parameters appearing between a string containing \\ and any other string would not be correctly detected. This issue has been fixed.

  • Packages

    • Uploading a package zip would fail on Windows devices. This issue has been fixed.

Improvement

  • Storage

    • Removed some work from the thread that schedules bucket transfers, which could be slightly expensive in cases where the cluster had fallen behind on uploads.

  • Configuration

    • Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.
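
The trimming amounts to stripping leading and trailing whitespace from each string field before saving, along the lines of this sketch (field names are hypothetical):

```python
# Normalize IdP configuration fields by trimming surrounding whitespace from
# string values; non-string values pass through unchanged.
def normalize_idp_config(fields: dict) -> dict:
    return {k: v.strip() if isinstance(v, str) else v for k, v in fields.items()}

cfg = normalize_idp_config({"issuer": "  https://idp.example.com \n"})
# cfg["issuer"] no longer carries the stray whitespace that caused errors
```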

Falcon LogScale 1.132.0 GA (2024-04-02)

Version: 1.132.0
Type: GA
Release Date: 2024-04-02
Availability: Cloud
End of Support: 2025-05-31
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown in queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so the use of this variable is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2() . The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2() . The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
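
As a rough sketch of the migration, legacy testData strings can be wrapped into the structured testCases shape expected by createParserV2() and updateParserV2(). The dictionary shape below follows the ParserTestCaseInput/ParserTestEventInput types mentioned above, but is an assumption to verify against your GraphQL schema:

```python
def migrate_test_data(test_data):
    """Wrap each legacy testData string in the structured testCases shape.

    The {"event": {"rawString": ...}} shape is assumed from the
    ParserTestCaseInput/ParserTestEventInput types; verify it against
    your cluster's schema before use.
    """
    return [{"event": {"rawString": raw}} for raw in test_data]

legacy = ["2024-12-17T10:00:00Z host=web-1 msg=started"]
print(migrate_test_data(legacy))
```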

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • Ingestion

    • Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.

      For more information, see Ingest Data from AWS S3.
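
The ramp-up/back-off behaviour described above can be sketched as follows. This is an illustrative model only; the step sizes and the concurrency cap are hypothetical, not LogScale's actual scheduler parameters:

```python
def next_concurrency(current, succeeded, cap=16):
    """Gradually ramp up on success, back off on failure (illustrative numbers)."""
    if succeeded:
        return min(current + 1, cap)   # additive increase, up to a cap
    return max(current // 2, 1)        # halve on failure; floor of 1 keeps periodic retries

level = 1
for ok in [True, True, True, False]:
    level = next_concurrency(level, ok)
print(level)  # ramped 1 -> 4, then halved to 2 after the failure
```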

  • Functions

    • For Cloud customers: the maximum value of the limit parameter for tail() and head() functions has been increased to 20,000.

    • For Self-Hosted solutions: the maximum value of the limit parameter for tail() and head() functions has been aligned with the StateRowLimit dynamic configuration. This means that the upper value of limit is now adjustable for these two functions.

  • Other

    • New metrics ingest-queue-write-offset and ingest-queue-read-offset have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.

    • New metric events-parsed has been added, serving as an indicator for how many input events a parser has been applied to.

Fixed in this release

  • Security

    • Various OIDC caching issues have been fixed, including ensuring that the JWKS cache is refreshed once per hour by default.

Improvement

  • Installation and Deployment

    • An error is now logged if the latency on global-events exceeds 150 seconds, to help prevent nodes from crashing.

Falcon LogScale 1.131.3 LTS (2024-09-24)

Version: 1.131.3
Type: LTS
Release Date: 2024-09-24
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.3/server-1.131.3.tar.gz

These notes include entries from the following previous releases: 1.131.1, 1.131.2

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.
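
The difference between lexicographical ordering and type=hex ordering can be illustrated with a small Python analogy (the actual sorting is performed by sort() and table() in the query engine):

```python
values = ["10", "9", "ff"]

# Lexicographical ordering: the fallback used for values that the
# provided type argument cannot interpret.
print(sorted(values))                            # ['10', '9', 'ff']

# type=hex ordering: compare by the numeric value of each hex string.
print(sorted(values, key=lambda v: int(v, 16)))  # ['9', '10', 'ff']
```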

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Security

    • DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy networkaddress.cache.ttl in the security manager of the JRE (see Java Networking Properties).

  • Ingestion

    • It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.

      For more information, see Delete an Ingest Feed.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.

New features and improvements

  • Security

    • Added support for authorizing with an external JWT from an IdP set up in our cloud environment.

    • The audience for dynamic OIDC IdPs in our cloud environments is now logscale-$orgId, where $orgId is the ID of your organization.

    • Added support for the Okta federated IdP OIDC extension for identity providers set up in the cloud.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

  • Automation and Alerts

    • Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.

    • The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
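
The new trigger limit rule reduces to a simple decision, sketched here with action types represented as plain strings for illustration:

```python
def filter_alert_trigger_limit(action_types):
    """Trigger limit is 15 if any email action is attached, otherwise 100."""
    return 15 if "email" in action_types else 100

print(filter_alert_trigger_limit(["email", "webhook"]))  # 15
print(filter_alert_trigger_limit(["webhook"]))           # 100
```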

  • Configuration

    • The new dynamic configuration MaxOpenSegmentsOnWorker is implemented to control the hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit, as it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

    • Authorization attempted via JWT tokens will now only try to fetch user information from the user info endpoint if the scope in the access token contains any of the following: profile, email, openid. If no such scope is found in the token, LogScale will try to extract the username from the token and no other user details will be added. The scope claim is extracted based on the new environment variable OIDC_SCOPE_CLAIM, whose default is scope.
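
The decision of whether to call the user info endpoint can be sketched as follows (a hypothetical helper; the claim name defaults to scope and is configurable via OIDC_SCOPE_CLAIM):

```python
USERINFO_SCOPES = {"profile", "email", "openid"}

def should_fetch_userinfo(token_claims, scope_claim="scope"):
    """Fetch user info only if the token's scope claim contains a qualifying scope.

    token_claims is the decoded JWT payload as a dict; the scope claim is
    assumed to be a space-separated string, per common OIDC practice.
    """
    scopes = set(token_claims.get(scope_claim, "").split())
    return bool(scopes & USERINFO_SCOPES)

print(should_fetch_userinfo({"scope": "openid offline_access"}))  # True
print(should_fetch_userinfo({"scope": "ingest"}))                 # False
```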

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The parseTimestamp() function is now able to parse timestamps with nanosecond precision.

    • The setField() query function is introduced. It takes two expressions, target and value, and sets the field named by the result of the target expression to the result of the value expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see setField().

    • The getField() query function is introduced. It takes an expression, source, and sets the field named by the as parameter to the result of the source expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see getField().
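
Treating an event as a Python dict, the behaviour of setField() and getField() can be approximated as below. This is an analogy only, not LogScale's implementation:

```python
def set_field(event, target, value):
    """setField() analogy: the field *named by* target receives the value."""
    event[target] = value
    return event

def get_field(event, source, as_name):
    """getField() analogy: copy the value of the field named by source into as_name."""
    event[as_name] = event.get(source)
    return event

e = {"key": "status", "status": "ok"}
set_field(e, e["key"], "failed")   # the field name comes from another field's value
get_field(e, e["key"], "result")   # reads the dynamically named field back out
print(e["status"], e["result"])    # failed failed
```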

  • Other

    • The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.

    • The missing-cluster-nodes metric will now track the nodes that are missing heartbeat data in addition to the nodes that have outdated heartbeat data. The new missing-cluster-nodes-stateful metric will track the registered nodes with outdated/missing heartbeat data that can write to global.

      For more information, see Node-Level Metrics.

    • The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables IP_FILTER_IDP, IP_FILTER_RDNS, and IP_FILTER_RDNS_SERVER respectively.

Fixed in this release

  • UI Changes

    • Field aliases could not be read on the sandbox repository. This issue is now fixed.

    • CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.

  • Automation and Alerts

    • Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.

  • Dashboards and Widgets

    • Shared dashboards created on the special humio-search-all view wouldn't load correctly. This issue has now been fixed.

    • A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.

  • Ingestion

    • Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.

    • Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.

  • Queries

    • Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.

  • Other

    • An issue with the IOC Configuration causing the local database to update too often has now been fixed.

  • Packages

    • Uploading a package zip would fail on Windows devices. This issue has been fixed.

    • Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.

    • When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.

Early Access

Improvement

  • Storage

    • Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.

    • SegmentChangesJobTrigger has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.

  • Configuration

    • The default value for AUTOSHARDING_MAX has changed from 128 to 1,024.

    • The default maximum limit for groupBy() has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow groupBy() to return the full million rows as a result when this function is the last aggregator: this is governed by the QueryResultRowCountLimit dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when groupBy() is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the GroupMaxLimit dynamic configuration.

    • The default value for AUTOSHARDING_TRIGGER_DELAY_MS has changed from 1 hour to 4 hours.

    • The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the amount of concurrent queries as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the QueryCoordinatorMemoryLimit dynamic configuration to 400,000,000.

  • Functions

    • Live queries now restart and run with the updated version of a saved query when the saved query changes.

      For more information, see User Functions (Saved Searches).

    • Reduced memory requirements when processing empty arrays in functions that accept them.

  • Other

    • Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.

Falcon LogScale 1.131.2 LTS (2024-05-14)

Version: 1.131.2
Type: LTS
Release Date: 2024-05-14
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.2/server-1.131.2.tar.gz

These notes include entries from the following previous releases: 1.131.1

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Security

    • DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the security policy networkaddress.cache.ttl in the security manager of the JRE (see Java Networking Properties).

  • Ingestion

    • It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.

      For more information, see Delete an Ingest Feed.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.

New features and improvements

  • Security

    • Added support for authorizing with an external JWT from an IdP set up in our cloud environment.

    • The audience for dynamic OIDC IdPs in our cloud environments is now logscale-$orgId, where $orgId is the ID of your organization.

    • Added support for the Okta federated IdP OIDC extension for identity providers set up in the cloud.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

  • Automation and Alerts

    • Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.

    • The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.

  • Configuration

    • The new dynamic configuration MaxOpenSegmentsOnWorker is implemented to control the hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit, as it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

    • Authorization attempted via JWT tokens will now only try to fetch user information from the user info endpoint if the scope in the access token contains any of the following: profile, email, openid. If no such scope is found in the token, LogScale will try to extract the username from the token and no other user details will be added. The scope claim is extracted based on the new environment variable OIDC_SCOPE_CLAIM, whose default is scope.

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The parseTimestamp() function is now able to parse timestamps with nanosecond precision.

    • The setField() query function is introduced. It takes two expressions, target and value, and sets the field named by the result of the target expression to the result of the value expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see setField().

    • The getField() query function is introduced. It takes an expression, source, and sets the field named by the as parameter to the result of the source expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see getField().

  • Other

    • The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.

    • The missing-cluster-nodes metric will now track the nodes that are missing heartbeat data in addition to the nodes that have outdated heartbeat data. The new missing-cluster-nodes-stateful metric will track the registered nodes with outdated/missing heartbeat data that can write to global.

      For more information, see Node-Level Metrics.

    • The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters by using the environment variables IP_FILTER_IDP, IP_FILTER_RDNS, and IP_FILTER_RDNS_SERVER respectively.

Fixed in this release

  • UI Changes

    • Field aliases could not be read on the sandbox repository. This issue is now fixed.

    • CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.

  • Automation and Alerts

    • Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.

  • Dashboards and Widgets

    • A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.

  • Ingestion

    • Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.

    • Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.

  • Queries

    • Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.

  • Other

    • An issue with the IOC Configuration causing the local database to update too often has now been fixed.

  • Packages

    • Uploading a package zip would fail on Windows devices. This issue has been fixed.

    • Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.

    • When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.

Early Access

Improvement

  • Storage

    • Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.

    • SegmentChangesJobTrigger has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.

  • Configuration

    • The default value for AUTOSHARDING_MAX has changed from 128 to 1,024.

    • The default maximum limit for groupBy() has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow groupBy() to return the full million rows as a result when this function is the last aggregator: this is governed by the QueryResultRowCountLimit dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when groupBy() is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the GroupMaxLimit dynamic configuration.

    • The default value for AUTOSHARDING_TRIGGER_DELAY_MS has changed from 1 hour to 4 hours.

    • The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the QueryCoordinatorMemoryLimit dynamic configuration to 400,000,000.
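The recommended pattern of using groupBy() as an intermediate computation that is filtered down afterwards can be sketched in Python (illustrative data and thresholds, not LogScale query syntax):

```python
# Sketch of the pattern described above: a large intermediate grouping
# step followed by aggressive filtering, so the final result stays well
# under the result-row limit. Data and thresholds are made up.
from collections import Counter

events = [{"user": f"u{i % 500}", "bytes": i} for i in range(10_000)]

# Intermediate grouping; the raised GroupMaxLimit allows up to a
# million such groups.
per_user = Counter()
for e in events:
    per_user[e["user"]] += e["bytes"]

# Filter the groups down before emitting the final result.
heavy_hitters = {u: b for u, b in per_user.items() if b > 99_000}
assert len(heavy_hitters) < len(per_user)  # final result is much smaller
```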

  • Functions

    • Live queries now restart and run with the updated version of a saved query when the saved query changes.

      For more information, see User Functions (Saved Searches).

    • Memory requirements have been reduced when processing empty arrays in functions that accept them.

  • Other

    • Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.

Falcon LogScale 1.131.1 LTS (2024-04-17)

Version: 1.131.1
Type: LTS
Release Date: 2024-04-17
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.1/server-1.131.1.tar.gz

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
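As an illustration of the input-field renames listed above, a client-side migration from the old createParser input to the createParserV2() shape could look like the following Python sketch. The helper name and the exact nesting of the test-case input (ParserTestEventInput inside ParserTestCaseInput) are assumptions based on the descriptions above, not a verified schema:

```python
# Hypothetical client-side mapping from the old createParser input to the
# createParserV2 input, following the renames described above. The
# test-case nesting (event/rawString) is an assumption, not verified
# against the actual GraphQL schema.

def to_create_parser_v2(old):
    return {
        "name": old["name"],
        "repositoryName": old["repositoryName"],
        "script": old["sourceCode"],              # sourceCode -> script
        "fieldsToTag": old.get("tagFields", []),  # tagFields -> fieldsToTag
        "allowOverwritingExistingParser": old.get("force", False),  # force renamed
        # Each old test string becomes a structured test case.
        "testCases": [
            {"event": {"rawString": s}} for s in old.get("testData", [])
        ],
    }

old_input = {
    "name": "my-parser",
    "repositoryName": "my-repo",
    "sourceCode": "parseJson()",
    "tagFields": ["host"],
    "force": True,
    "testData": ['{"msg": "hello"}'],
}
new_input = to_create_parser_v2(old_input)
assert new_input["script"] == "parseJson()"
assert new_input["testCases"][0]["event"]["rawString"] == '{"msg": "hello"}'
```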

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Security

    • DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the networkaddress.cache.ttl security property in the JRE (see Java Networking Properties).
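The behavioral change can be illustrated with a minimal TTL-cache sketch in Python (a stand-in for the JVM's DNS cache, not the actual mechanism; the class and names are invented):

```python
import time

# Illustrative sketch of the change: cache entries that expire after a
# TTL (60 s by default) instead of living forever. Not the JVM's actual
# DNS cache implementation.

class TtlCache:
    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}

    def put(self, host, addr):
        self._entries[host] = (addr, self.clock())

    def get(self, host):
        entry = self._entries.get(host)
        if entry is None:
            return None
        addr, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._entries[host]  # expired: force a fresh lookup
            return None
        return addr

# Fake clock so the example is deterministic.
now = [0.0]
cache = TtlCache(ttl_seconds=60, clock=lambda: now[0])
cache.put("example.com", "93.184.216.34")
assert cache.get("example.com") == "93.184.216.34"
now[0] = 61.0
assert cache.get("example.com") is None  # invalidated after the TTL
```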

  • Ingestion

    • It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.

      For more information, see Delete an Ingest Feed.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.

New features and improvements

  • Security

    • Added support for authorizing with an external JWT from an IdP set up in our cloud environment.

    • The audience for dynamic OIDC IdPs in our cloud environments is now logscale-$orgId, where $orgId is the ID of your organization.

    • Added support for the Okta federated IdP OIDC extension for identity providers set up in the cloud.

  • Automation and Alerts

    • Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.

    • The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions: if one or more email actions are associated, the trigger limit will be 15; otherwise, it will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute; all other custom trigger limits will be ignored. This is a non-backwards-compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
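The new trigger-limit rule can be sketched as a small Python function (the action representation is hypothetical):

```python
# Sketch of the trigger-limit determination described above: the limit is
# no longer user-configurable and is derived from the associated actions.
# Representing actions by their type strings is a hypothetical choice.

def trigger_limit(action_types):
    # 15 if one or more email actions are associated, otherwise 100.
    return 15 if "email" in action_types else 100

assert trigger_limit(["email", "webhook"]) == 15
assert trigger_limit(["webhook"]) == 100
assert trigger_limit([]) == 100
```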

  • Configuration

    • The new dynamic configuration MaxOpenSegmentsOnWorker is implemented to control the hard cap on open segment files for the scheduler. The scheduler should not reach this limit in most cases; it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

    • Authorization attempted via JWT tokens will now only fetch user information from the user info endpoint if the scope in the access token contains any of the following: profile, email, openid. If no such scope is present in the token, LogScale will try to extract the username from the token and no other user details will be added. The scope claim is extracted based on the new environment variable OIDC_SCOPE_CLAIM, whose default is scope.

  • Queries

    • Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The parseTimestamp() function is now able to parse timestamps with nanosecond precision.

    • The setField() query function is introduced. It takes two expressions, target and value, and sets the field named by the result of the target expression to the result of the value expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see setField().

    • The getField() query function is introduced. It takes an expression, source, and sets the field named by the as parameter to the value of the field whose name is the result of the source expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see getField().

  • Other

    • The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.

    • The missing-cluster-nodes metric will now track the nodes that are missing heartbeat data in addition to the nodes that have outdated heartbeat data. The new missing-cluster-nodes-stateful metric will track the registered nodes with outdated/missing heartbeat data that can write to global.

      For more information, see Node-Level Metrics.

    • The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters using the IP_FILTER_IDP, IP_FILTER_RDNS, and IP_FILTER_RDNS_SERVER environment variables.

Fixed in this release

  • UI Changes

    • Field aliases could not be read on the sandbox repository. This issue is now fixed.

    • CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.

  • Automation and Alerts

    • Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.

  • Dashboards and Widgets

    • A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.

  • Ingestion

    • Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.

    • Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.

  • Queries

    • Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.

  • Other

    • An issue with the IOC Configuration causing the local database to update too often has now been fixed.

  • Packages

    • Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.

    • When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only zip files are now accepted.

Early Access

Improvement

  • Storage

    • Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.

    • SegmentChangesJobTrigger has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.

  • Configuration

    • The default value for AUTOSHARDING_MAX has changed from 128 to 1,024.

    • The default maximum limit for groupBy() has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow groupBy() to return the full million rows as a result when this function is the last aggregator: this is governed by the QueryResultRowCountLimit dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when groupBy() is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the GroupMaxLimit dynamic configuration.

    • The default value for AUTOSHARDING_TRIGGER_DELAY_MS has changed from 1 hour to 4 hours.

    • The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the QueryCoordinatorMemoryLimit dynamic configuration to 400,000,000.

  • Functions

    • Live queries now restart and run with the updated version of a saved query when the saved query changes.

      For more information, see User Functions (Saved Searches).

    • Memory requirements have been reduced when processing empty arrays in functions that accept them.

  • Other

    • Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.

Falcon LogScale 1.131.0 GA (2024-03-26)

Version: 1.131.0
Type: GA
Release Date: 2024-03-26
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • We've removed a throttling behavior that prevented background merges of mini-segments from running when digest load is high. Such throttling can cause global in the LogScale cluster to grow over time if the digest load isn't transient, which is undesirable.

    • Moving mini-segments to the digest leader in cases where it is not necessary is now avoided. This new behavior reduces global traffic on digest reassignment.

    • Registering local segment files is skipped on nodes that are configured to not have storage via their node role.

    • When booting, a node now waits until it has caught up to the top of global before publishing its start message. This should help avoid global publish timeouts on boot when global has a lot of traffic.

New features and improvements

  • UI Changes

    • The parser test window width can now be resized.

  • Other

    • The metrics endpoint for the scheduled report render node has been updated to output the Prometheus text-based format.

Fixed in this release

  • UI Changes

    • Duplicate HTML escaping has been removed to prevent recursive field references from having double-escaped formatting in emails.

  • Storage

    • We've fixed a rarely hit error in the query scheduler causing a ClassCastException for scala.runtime.Nothing.

  • Functions

    • The join() function has been fixed: warnings from the sub-query would not propagate to the main query result.

    • Serialization of very large query states would crash nodes by requesting an array larger than what the JVM can allocate. This issue has been fixed.

Early Access

Improvement

  • Storage

    • Concurrency for segment merging is improved, thus avoiding some unnecessary and inefficient pauses in execution.

    • We've switched to running the RetentionJob in a separate thread from DataSyncJob. This should enable more consistent merging.

    • The RetentionJob work is now divided among nodes such that there's no overlap. This reduces traffic in global.

    • An internal limit on use of off-heap memory has been adjusted to allow more threads to perform segment merging in parallel.

  • Functions

    • Some performance improvements have been made to the join() function, allowing it to skip blocks that do not contain the specified fields of the main and sub-query.

Falcon LogScale 1.130.0 GA (2024-03-19)

Version: 1.130.0
Type: GA
Release Date: 2024-03-19
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutations are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutations are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2(), where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
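To emulate the old testData behavior against the new API, a migration helper can wrap each legacy test string as described above. This is a hedged sketch: the exact input field names (rawString, outputAssertions) are assumptions based on the input type names mentioned here and should be verified against your cluster's GraphQL schema; the limits are those stated in this note.

```python
MAX_TEST_CASES = 2_000         # per-parser limit stated above
MAX_TEST_INPUT_CHARS = 40_000  # per-test-case input limit stated above

def to_test_cases(test_data: list[str]) -> list[dict]:
    """Wrap each legacy test string in a ParserTestEventInput inside a
    ParserTestCaseInput, with no output assertions (the old behavior)."""
    if len(test_data) > MAX_TEST_CASES:
        raise ValueError("parser has more than 2,000 test cases")
    cases = []
    for raw in test_data:
        if len(raw) > MAX_TEST_INPUT_CHARS:
            raise ValueError("test input exceeds 40,000 characters")
        # Field names below are assumptions -- check your GraphQL schema.
        cases.append({"event": {"rawString": raw}, "outputAssertions": []})
    return cases
```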

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Security

    • DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the networkaddress.cache.ttl security property of the JRE (see Java Networking Properties).

New features and improvements

  • Functions

    • The parseTimestamp() function is now able to parse timestamps with nanosecond precision.

Fixed in this release

  • Automation and Alerts

    • Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.

  • Dashboards and Widgets

    • A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.

Early Access

Improvement

  • Storage

    • Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.

  • Functions

    • Reduced the memory required when processing empty arrays in functions that accept them.

Falcon LogScale 1.129.0 GA (2024-03-12)

Version: 1.129.0
Type: GA
Release Date: 2024-03-12
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

GraphQL API

  • The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely, for example changing sort(..., type=any) to sort(...), or supply an argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutations are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutations are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2(), where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Ingestion

    • Due to issues caused by the blocking, we have reverted the behavior of blocking heavy queries during high ingest and returned to only stopping the query. Heavy queries causing ingest delay will be handled differently in a future release.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum required LogScale version to upgrade from has been raised to 1.106, in order to remove some workarounds for compatibility with old versions.

New features and improvements

  • Security

    • Added support for authorizing with an external JWT from an IdP set up in our cloud environment.

    • The audience for dynamic OIDC IdPs in our cloud environments is now logscale-$orgId, where $orgId is the ID of your organization.

    • Added support for the Okta federated IdP OIDC extension for identity providers set up in the cloud.

  • Automation and Alerts

    • Throttling and field-based throttling have been introduced as optional functionalities in Filter Alerts. The minimum throttling period is 1 minute.

    • The customizable trigger limit for Filter Alerts is removed. The trigger limit is now automatically determined based on the associated actions. If one or more email actions are associated, the trigger limit will be 15, otherwise, the trigger limit will be 100. Any existing customizable trigger limit of 1 will be treated as a throttling period of 1 minute, all other custom trigger limits will be ignored. This is a non-backwards compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.
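The new trigger-limit rule can be expressed as a short sketch. The action-type labels and helper names here are illustrative, not LogScale identifiers:

```python
def effective_trigger_limit(action_types: list[str]) -> int:
    # Trigger limit now derives from the associated actions:
    # 15 if one or more email actions are attached, otherwise 100.
    return 15 if "email" in action_types else 100

def migrate_custom_limit(custom_limit):
    # A legacy custom trigger limit of 1 becomes a 1-minute throttle;
    # all other custom limits are ignored.
    return "1 minute throttle" if custom_limit == 1 else None
```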

  • Configuration

    • Authorization attempted via JWT tokens will now only fetch user information from the user info endpoint if the scope claim in the access token contains any of the following: profile, email, openid. If no such scope is found in the token, LogScale will try to extract the username from the token, and no other user details will be added. The scope claim is read based on the new environment variable OIDC_SCOPE_CLAIM, whose default is scope.
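The decision rule described above can be sketched as follows. This is an illustration of the logic, not LogScale's implementation; the claim and scope names are those stated in the note:

```python
import os

USERINFO_SCOPES = {"profile", "email", "openid"}

def should_call_userinfo(token_claims: dict) -> bool:
    """Only consult the user info endpoint when the token's scope claim
    contains profile, email, or openid. The claim name defaults to
    "scope" and can be overridden via OIDC_SCOPE_CLAIM."""
    scope_claim = os.environ.get("OIDC_SCOPE_CLAIM", "scope")
    scopes = set(str(token_claims.get(scope_claim, "")).split())
    return bool(scopes & USERINFO_SCOPES)
```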

  • Other

    • The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.

    • The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters using the environment variables IP_FILTER_IDP, IP_FILTER_RDNS, and IP_FILTER_RDNS_SERVER.

Fixed in this release

  • Ingestion

    • Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.

Improvement

  • Other

    • Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times shortly after each other.

Falcon LogScale 1.128.0 GA (2024-03-05)

Version: 1.128.0
Type: GA
Release Date: 2024-03-05
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely, for example changing sort(..., type=any) to sort(...), or supply an argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • Configuration

    • The new dynamic configuration MaxOpenSegmentsOnWorker has been implemented to control the hard cap on open segment files for the scheduler. The scheduler should in most cases not reach this limit; it only acts as a backstop. Therefore, we recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

Fixed in this release

  • UI Changes

    • CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.

  • Packages

    • When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only ZIP files are now accepted.

Falcon LogScale 1.127.0 GA (2024-02-27)

Version: 1.127.0
Type: GA
Release Date: 2024-02-27
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely, for example changing sort(..., type=any) to sort(...), or supply an argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • Functions

    • The setField() query function is introduced. It takes two expressions, target and value, and sets the field named by the result of the target expression to the result of the value expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see setField().

    • The getField() query function is introduced. It takes an expression, source, and sets the field specified by the as parameter to the value of the field named by the result of the source expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime.

      For more information, see getField().
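In Python terms, treating an event as a dict, the indirection these functions provide looks roughly like this (a sketch of the idea, not LogScale's implementation):

```python
def set_field(event: dict, target: str, value) -> None:
    # setField(): the name of the field being written is computed at runtime.
    event[target] = value

def get_field(event: dict, source: str, as_field: str) -> None:
    # getField(): read the field whose name is the result of evaluating
    # source, and store its value under as_field.
    event[as_field] = event.get(source)

event = {"name": "ip", "ip": "10.0.0.1"}
get_field(event, event["name"], as_field="value")  # field name taken from data
print(event["value"])
# → 10.0.0.1
```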

Improvement

  • Configuration

    • The default maximum limit for groupBy() has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns it will not allow groupBy() to return the full million rows as a result when this function is the last aggregator: this is governed by the QueryResultRowCountLimit dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when groupBy() is used as a computational tool for creating groups that are then later aggressively filtered and/or aggregated down in size. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the GroupMaxLimit dynamic configuration.

    • The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the amount of concurrent queries as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the QueryCoordinatorMemoryLimit dynamic configuration to 400,000,000.
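Both GroupMaxLimit and QueryCoordinatorMemoryLimit are dynamic configurations, so they can be adjusted through the GraphQL API. A hedged sketch of building such a request follows; the setDynamicConfig mutation name and its input shape are assumptions to verify against your cluster's GraphQL schema:

```python
import json

def dynamic_config_payload(config: str, value: str) -> dict:
    """Build a GraphQL request body for updating a dynamic configuration,
    e.g. GroupMaxLimit or QueryCoordinatorMemoryLimit. The mutation name
    and input fields are assumptions -- check your GraphQL schema."""
    mutation = (
        "mutation($config: DynamicConfig!, $value: String!) {"
        "  setDynamicConfig(input: {config: $config, value: $value})"
        "}"
    )
    return {"query": mutation, "variables": {"config": config, "value": value}}

# Lower the coordinator memory limit back to 400 MB:
payload = dynamic_config_payload("QueryCoordinatorMemoryLimit", "400000000")
body = json.dumps(payload)  # POST this to the cluster's GraphQL endpoint
```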

  • Functions

    • Live queries now restart and run with the updated version of a saved query when the saved query changes.

      For more information, see User Functions (Saved Searches).

Falcon LogScale 1.126.0 GA (2024-02-20)

Version: 1.126.0
Type: GA
Release Date: 2024-02-20
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely, for example changing sort(..., type=any) to sort(...), or supply an argument for type that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • Configuration

    • Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes will report their ingest rate for any given datasource, and the total rate for each datasource is estimated based on that. The dynamic configuration TargetMaxRateForDatasource still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to be twice the TargetMaxRateForDatasource value before shards are added.
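The estimation idea can be illustrated as follows. This is an illustration of subset-based extrapolation, not LogScale's actual implementation:

```python
def estimate_total_rate(reported_rates: list[float], total_nodes: int) -> float:
    """Extrapolate a datasource's cluster-wide ingest rate from the subset
    of nodes that report it: assume non-reporting nodes ingest at the
    mean of the reported rates."""
    if not reported_rates:
        return 0.0
    return sum(reported_rates) / len(reported_rates) * total_nodes

# e.g. 3 of 12 nodes report ~5 MB/s each
rate = estimate_total_rate([5.0, 5.0, 5.0], total_nodes=12)
# → 60.0
```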

  • Ingestion

    • Ingest feeds can now read from an AWS SQS queue that has been populated with AWS SNS subscription events.

      For more information, see Ingest Data from AWS S3.

Fixed in this release

  • UI Changes

    • Field aliases could not be read on the sandbox repository. This issue is now fixed.

  • Other

    • An issue with the IOC Configuration causing the local database to update too often has now been fixed.

  • Packages

    • Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.

Falcon LogScale 1.125.0 GA (2024-02-13)

Version: 1.125.0
Type: GA
Release Date: 2024-02-13
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the type argument that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the type argument to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.
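
The fallback behavior can be illustrated with a rough Python analogue; note that placing non-numeric values after numeric ones is an assumption of this sketch, not documented LogScale behavior:

```python
# Rough Python analogue of the documented fallback: values that parse as
# numbers are ordered numerically; anything else falls back to
# lexicographical order. Non-numeric values sorting last is an assumption
# of this sketch.

def sort_key(value):
    try:
        return (0, float(value), "")
    except ValueError:
        return (1, 0.0, value)

def sort_with_number_fallback(values):
    return sorted(values, key=sort_key)
```
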

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Ingestion

    • It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.

      For more information, see Delete an Ingest Feed.

New features and improvements

  • Other

    • The missing-cluster-nodes metric will now track the nodes that are missing heartbeat data in addition to the nodes that have outdated heartbeat data. The new missing-cluster-nodes-stateful metric will track the registered nodes with outdated/missing heartbeat data that can write to global.

      For more information, see Node-Level Metrics.

Improvement

  • Storage

    • Digest reassignment is now allowed to assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized and where an even partition assignment is therefore not expected.

    • SegmentChangesJobTrigger has been disabled on nodes configured to not be able to store segments, thus saving some CPU time.

Falcon LogScale 1.124.3 LTS (2024-05-14)

Version: 1.124.3
Type: LTS
Release Date: 2024-05-14
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.3/server-1.124.3.tar.gz

These notes include entries from the following previous releases: 1.124.1, 1.124.2

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The default accuracy of the percentile() function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in reported percentile. Specifically, the percentile() function may now deviate by up to one hundredth (1%) of the true percentile, meaning that if a given percentile has a true value of 1000, percentile() may report a value in the range [990; 1010].

      Conversely, percentile() now uses less memory by default, which should allow for additional series or groups when this function is used with either timeChart() or groupBy() and the default accuracy is used.
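
The stated bound is simple arithmetic; this sketch only restates the documented guarantee:

```python
# The documented default accuracy: the reported percentile may deviate by up
# to 1/100 of the true value, so a true value v may be reported anywhere in
# [v - v/100, v + v/100].

def reported_percentile_bounds(true_value, accuracy=100):
    delta = true_value / accuracy
    return (true_value - delta, true_value + delta)
```
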

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the Asset interface type in GraphQL that Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes implemented. It was not used as a type for any field. All fields from the Asset interface type are still present in the implementing types.

Configuration

  • The DEFAULT_PARTITION_COUNT configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • The QUERY_COORDINATOR environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the query node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the INITIAL_DISABLED_NODE_TASKS environment variable.

    For more information, see INITIAL_DISABLED_NODE_TASKS.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.
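
The conservative starting point can be sketched as follows; the segment representation and names are illustrative assumptions, not LogScale internals:

```python
# Hypothetical sketch: reading resumes at the earliest offset of any segment
# that is not yet fully replicated, even if later segments on the same
# datasource are already replicated.

def safe_start_offset(segments):
    """segments: iterable of (start_offset, fully_replicated) tuples."""
    pending = [offset for offset, replicated in segments if not replicated]
    if pending:
        return min(pending)
    # Everything is replicated; resume from the newest segment's offset.
    return max((offset for offset, _ in segments), default=0)
```
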

  • Ingestion

    • Due to issues caused by the blocking, we have reverted the behavior of blocking heavy queries during high ingest and returned to the previous behavior of only stopping the query. Heavy queries causing ingest delay will be handled differently in a future release.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.

New features and improvements

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • Time zone data has been updated to IANA 2023d.

    • Deletion of a file that is actively used by live queries will now stop those queries.

      For more information, see Exporting or Deleting a File.

    • Multi-Cluster Search — early adopter release for Self-hosted LogScale.

      • Keep the data close to the source, search from single UI

      • Search across multiple LogScale clusters in a single view

      • Support key functionalities like alerts & dashboards

      The functionality is limited to LogScale self-hosted versions at this point.

      For more information, see LogScale Multi-Cluster Search.

    • On the Manage Users page, it is now possible to also filter users by their assigned roles (for example, type admin in the Users search field).

    • The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names — aliases — to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.

      For more information, see Field Aliasing.

  • Automation and Alerts

    • The following changes affect the UI for Standard Alerts:

      • A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

      • It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.

      • When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example 24h is changed to 1d and 60s is changed to 1m.
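
The unit normalization described above can be sketched as below; the sketch restricts itself to the s/m/h/d units, while the real Relative Time Syntax supports more:

```python
# Illustrative conversion of a duration in seconds to the largest whole unit,
# matching the examples in the note (24h -> 1d, 60s -> 1m).

UNITS = [("d", 86400), ("h", 3600), ("m", 60), ("s", 1)]

def largest_unit(seconds):
    for suffix, size in UNITS:
        if seconds >= size and seconds % size == 0:
            return f"{seconds // size}{suffix}"
    return f"{seconds}s"
```
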

    • The ChangeTriggersAndActions permission is now replaced by two new permissions:

      • ChangeTriggers permission is needed to edit alerts or scheduled searches.

      • ChangeActions permission is needed to edit actions as well as viewing them. Viewing the name and type of actions when editing triggers is still possible without this permission.

      Any user with the legacy ChangeTriggersAndActions permissions will by default have both. It is possible to remove one of them for more granular access controls.

    • Slow-query logging has been added for cases where an alert is slow to start because its query has not finished the historical part.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.
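
The limited quantity, the total number of selected fields, can be illustrated with a toy counter over a selection set modeled as a nested dict. This is an illustration only; the real limit also counts fragments, which the sketch ignores:

```python
# Toy count of selected fields in a GraphQL-like selection set, modeled as a
# nested dict of field name -> sub-selection.

def count_selections(selection_set):
    total = 0
    for _field, sub_selection in selection_set.items():
        total += 1  # the field itself
        if sub_selection:
            total += count_selections(sub_selection)
    return total
```
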

  • Configuration

    • The meaning of S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of S3_STORAGE_CONCURRENCY=10 for example, meant that LogScale would allow 10 concurrent uploads, and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.

    • New dynamic configurations have been added:

    • Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes report their ingest rate for any given datasource, and the total rate for each datasource is estimated from that sample. The dynamic configuration TargetMaxRateForDatasource still sets the threshold for sharding; however, the rate no longer needs to reach twice the TargetMaxRateForDatasource configuration before shards are added. Exceeding the threshold itself is now sufficient.
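
The change to S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY described above amounts to sharing one permit pool across both transfer directions. A minimal sketch, using a Python semaphore as a stand-in for LogScale's internal throttling:

```python
# Sketch of the new semantics: a single shared pool of permits for uploads
# and downloads combined, instead of one pool per direction. Illustration
# only, not the actual implementation.

import threading

class TransferThrottle:
    def __init__(self, concurrency):
        # One pool covering both transfer directions.
        self._permits = threading.BoundedSemaphore(concurrency)

    def transfer(self, do_io):
        with self._permits:
            return do_io()
```
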

  • Dashboards and Widgets

    • A series of improvements has been added to the dashboard layout experience:

      • New widgets will be added in the topmost available space

      • When you drag widgets up, all widgets in the same column will move together

      • Improved experience when swapping the order of widgets (horizontally or vertically)

  • Ingestion

    • Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration on the AWS side to get started.

      This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.

      For more information, see Ingest Data from AWS S3.

    • The limits on the size of parser test cases when exporting as templates or packages have been increased.

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.

  • Log Collector

    • Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the group filters they match, eliminating the need to manually assign a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.

      Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.

      For more information, see Manage Groups.

  • Queries

    • The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.
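
The described division of worker time reduces to simple arithmetic; a sketch under the assumption that shares are computed exactly this way:

```python
# Illustrative arithmetic for the described scheme: worker time is split
# evenly between users, and each user's share is split evenly among that
# user's queries. The function name and shape are assumptions for the sketch.

def per_query_share(total_time, queries_per_user):
    """queries_per_user: dict mapping user -> number of running queries."""
    user_share = total_time / len(queries_per_user)
    return {
        user: (user_share / n_queries if n_queries else 0.0)
        for user, n_queries in queries_per_user.items()
    }
```
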

    • Live query cost metrics corrections:

      • livequeries-rate metric has changed from long to double.

      • livequeries-rate-canceled-due-to-digest-delay metric has changed from long to double.

      For more information, see Node-Level Metrics.

  • Functions

    • The new array:length() function has been introduced. It finds the length of an array by counting the number of array entries.

      For more information, see array:length().
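
LogScale stores arrays as flat fields named name[0], name[1], and so on; a Python analogue of counting those entries (the event dict below is illustrative):

```python
# Python analogue of what array:length() computes: count the flat fields
# belonging to a given array in an event.

import re

def array_length(event, array_name):
    entry = re.compile(re.escape(array_name) + r"\[\d+\]$")
    return sum(1 for field in event if entry.match(field))
```
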

Fixed in this release

  • UI Changes

    • When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.

  • Automation and Alerts

    • Scheduled searches with a failing action would, after being updated, constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.

  • Storage

    • Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

      For more information, see Restoring a Repository or View.

  • Dashboards and Widgets

    • Users were prevented from exporting results of queries containing multi value parameters. This issue is now fixed.

  • Queries

    • Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.

    • Fixed a bug in which the second poll inside the cluster could be delayed by upwards of 10 seconds. This fix ensures that the time between polls will never be later than the start time of the query, which means that early polls will not be delayed too much, enabling faster query responses.

  • Functions

    • selectLast() has been fixed for an issue that could cause this query function to miss events in certain cases.

  • Other

    • It was not possible to create a new repository with a time retention greater than 365 days. Now, the UI limit is the one that is set on the customer organization.

      Input validation on fields when creating new repositories is now also improved.

Improvement

  • Storage

    • Digest reassignment is now allowed to assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized and where an even partition assignment is therefore not expected.

  • Ingestion

    • The cancelling mechanism for specific costly queries has been improved to solve cases where those queries were restarted anyway: a query exactly matching the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.

Falcon LogScale 1.124.2 LTS (2024-03-20)

Version: 1.124.2
Type: LTS
Release Date: 2024-03-20
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.2/server-1.124.2.tar.gz

These notes include entries from the following previous releases: 1.124.1

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The default accuracy of the percentile() function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in reported percentile. Specifically, the percentile() function may now deviate by up to one hundredth (1%) of the true percentile, meaning that if a given percentile has a true value of 1000, percentile() may report a value in the range [990; 1010].

      Conversely, percentile() now uses less memory by default, which should allow for additional series or groups when this function is used with either timeChart() or groupBy() and the default accuracy is used.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the Asset interface type in GraphQL that Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes implemented. It was not used as a type for any field. All fields from the Asset interface type are still present in the implementing types.

Configuration

  • The DEFAULT_PARTITION_COUNT configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • The QUERY_COORDINATOR environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the query node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the INITIAL_DISABLED_NODE_TASKS environment variable.

    For more information, see INITIAL_DISABLED_NODE_TASKS.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.

  • Ingestion

    • Due to issues caused by the blocking, we have reverted the behavior of blocking heavy queries during high ingest and returned to the previous behavior of only stopping the query. Heavy queries causing ingest delay will be handled differently in a future release.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.

New features and improvements

  • UI Changes

    • Time zone data has been updated to IANA 2023d.

    • Deletion of a file that is actively used by live queries will now stop those queries.

      For more information, see Exporting or Deleting a File.

    • Multi-Cluster Search — early adopter release for Self-hosted LogScale.

      • Keep the data close to the source, search from single UI

      • Search across multiple LogScale clusters in a single view

      • Support key functionalities like alerts & dashboards

      The functionality is limited to LogScale self-hosted versions at this point.

      For more information, see LogScale Multi-Cluster Search.

    • On the Manage Users page, it is now possible to also filter users by their assigned roles (for example, type admin in the Users search field).

    • The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names — aliases — to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.

      For more information, see Field Aliasing.

  • Automation and Alerts

    • The following changes affect the UI for Standard Alerts:

      • A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

      • It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.

      • When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example 24h is changed to 1d and 60s is changed to 1m.

    • The ChangeTriggersAndActions permission is now replaced by two new permissions:

      • ChangeTriggers permission is needed to edit alerts or scheduled searches.

      • ChangeActions permission is needed to edit actions as well as viewing them. Viewing the name and type of actions when editing triggers is still possible without this permission.

      Any user with the legacy ChangeTriggersAndActions permissions will by default have both. It is possible to remove one of them for more granular access controls.

    • Slow-query logging has been added for cases where an alert is slow to start because its query has not finished the historical part.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

  • Configuration

    • The meaning of S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of S3_STORAGE_CONCURRENCY=10 for example, meant that LogScale would allow 10 concurrent uploads, and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.

    • New dynamic configurations have been added:

    • Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes report their ingest rate for any given datasource, and the total rate for each datasource is estimated from that sample. The dynamic configuration TargetMaxRateForDatasource still sets the threshold for sharding; however, the rate no longer needs to reach twice the TargetMaxRateForDatasource configuration before shards are added. Exceeding the threshold itself is now sufficient.

  • Dashboards and Widgets

    • A series of improvements has been added to the dashboard layout experience:

      • New widgets will be added in the topmost available space

      • When you drag widgets up, all widgets in the same column will move together

      • Improved experience when swapping the order of widgets (horizontally or vertically)

  • Ingestion

    • Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration on the AWS side to get started.

      This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.

      For more information, see Ingest Data from AWS S3.

    • The limits on the size of parser test cases when exporting as templates or packages have been increased.

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.

  • Log Collector

    • Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language Syntax. New Collectors enrolled into the fleet will automatically be configured based upon the group filters they match, eliminating the need to manually assign a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.

      Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.

      For more information, see Manage Groups.

  • Queries

    • The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.

    • Live query cost metrics corrections:

      • livequeries-rate metric has changed from long to double.

      • livequeries-rate-canceled-due-to-digest-delay metric has changed from long to double.

      For more information, see Node-Level Metrics.

  • Functions

    • The new array:length() function has been introduced. It finds the length of an array by counting the number of array entries.

      For more information, see array:length().

Fixed in this release

  • UI Changes

    • When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.

  • Automation and Alerts

    • Scheduled searches with a failing action would, after being updated, constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.

  • Storage

    • Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

      For more information, see Restoring a Repository or View.

  • Dashboards and Widgets

    • Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

  • Queries

    • Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.

    • Fixed a bug in which the second poll inside the cluster could be delayed by upwards of 10 seconds. The fix ensures that the time between polls never extends past the start time of the query, so early polls are not delayed excessively, enabling faster query responses.

  • Functions

    • selectLast() has been fixed for an issue that could cause this query function to miss events in certain cases.

  • Other

    • It was not possible to create a new repository with a time retention greater than 365 days. The UI now applies the retention limit configured for the customer organization.

      Input validation on fields when creating new repositories is now also improved.

Improvement

  • Storage

    • Digest reassignment may now assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.
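
One hypothetical way to picture such an uneven assignment is to distribute partitions proportionally to per-host capacity weights. The weighting scheme below is purely illustrative; LogScale's actual assignment logic may differ.

```python
def assign_partitions(num_partitions, host_weights):
    """Assign partition ids to hosts proportionally to host weight,
    so larger hosts receive more digest partitions."""
    total = sum(host_weights.values())
    assignment = {h: [] for h in host_weights}
    hosts = sorted(host_weights)
    for p in range(num_partitions):
        # Greedy: give the next partition to the host that is
        # furthest below its weighted target share.
        def deficit(h):
            target = num_partitions * host_weights[h] / total
            return target - len(assignment[h])
        best = max(hosts, key=deficit)
        assignment[best].append(p)
    return assignment

# A host twice the size ends up owning twice the partitions:
layout = assign_partitions(24, {"big": 2, "small": 1})
```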

  • Ingestion

    • The cancellation mechanism for specific costly queries has been improved to cover cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.
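
The behavior described above resembles a small TTL-based block list keyed on the exact query string. The sketch below is illustrative, not LogScale's implementation.

```python
import time

BLOCK_SECONDS = 5 * 60  # costly queries are blocked for 5 minutes

class QueryBlockList:
    """Temporarily blocks queries by exact query-string match."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._blocked_until = {}

    def block(self, query_string):
        # An exact match on the query string is blocked until expiry.
        self._blocked_until[query_string] = self._clock() + BLOCK_SECONDS

    def is_blocked(self, query_string):
        expiry = self._blocked_until.get(query_string)
        return expiry is not None and self._clock() < expiry
```

An injected clock makes the expiry behavior easy to verify without waiting five minutes.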

Falcon LogScale 1.124.1 LTS (2024-02-29)

Version: 1.124.1
Type: LTS
Release Date: 2024-02-29
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.124.1/server-1.124.1.tar.gz

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The default accuracy of the percentile() function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in reported percentile. Specifically, the percentile() function may now deviate by up to one 100th of the true percentile, meaning that if a given percentile has a true value of 1000, percentile() may report a percentile in the range of [990; 1010].

      In return, percentile() now uses less memory by default, which should allow for more series or groups when this function is used with either timeChart() or groupBy() at the default accuracy.
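
The deviation bound quoted above can be checked with a line of arithmetic: a relative error of 1/100 around a true percentile of 1000 yields the [990; 1010] interval. This helper is illustrative only.

```python
def percentile_bounds(true_value, relative_error=0.01):
    """Worst-case reported range for a percentile given the
    default relative accuracy described above."""
    delta = true_value * relative_error
    return (true_value - delta, true_value + delta)

# percentile_bounds(1000) -> (990.0, 1010.0)
```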

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the Asset interface type in GraphQL that Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes implemented. It was not used as a type for any field. All fields from the Asset interface type are still present in the implementing types.

Configuration

  • The DEFAULT_PARTITION_COUNT configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • The QUERY_COORDINATOR environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the query node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the INITIAL_DISABLED_NODE_TASKS environment variable.

    For more information, see INITIAL_DISABLED_NODE_TASKS.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow for skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.

New features and improvements

  • UI Changes

    • Time zone data has been updated to IANA 2023d.

    • Deletion of a file that is actively used by live queries will now stop those queries.

      For more information, see Exporting or Deleting a File.

    • Multi-Cluster Search — early adopter release for Self-hosted LogScale.

      • Keep the data close to the source, search from a single UI

      • Search across multiple LogScale clusters in a single view

      • Support key functionalities like alerts & dashboards

      The functionality is limited to LogScale self-hosted versions at this point.

      For more information, see LogScale Multi-Cluster Search.

    • When managing users, it is now possible to also filter users by their assigned roles (for example, type admin in the Users search field).

    • The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names — aliases — to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.

      For more information, see Field Aliasing.

  • Automation and Alerts

    • The following changes affect the UI for Standard Alerts:

      • A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

      • It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.

      • When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, 24h is changed to 1d and 60s is changed to 1m.

    • The ChangeTriggersAndActions permission is now replaced by two new permissions:

      • ChangeTriggers permission is needed to edit alerts or scheduled searches.

      • ChangeActions permission is needed to edit actions as well as viewing them. Viewing the name and type of actions when editing triggers is still possible without this permission.

      Any user with the legacy ChangeTriggersAndActions permissions will by default have both. It is possible to remove one of them for more granular access controls.

    • Slow-query logging has been added for when an alert is slow to start because its query has not finished the historical part.
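
The time-window rounding and normalization described above (milliseconds rounded to whole seconds; the window then re-expressed in the largest exact unit, so 24h becomes 1d and 60s becomes 1m) can be sketched as follows. The helper is illustrative, not LogScale code.

```python
def normalize_window(milliseconds):
    """Round to whole seconds, then express the window in the
    largest relative-time unit that represents it exactly."""
    seconds = round(milliseconds / 1000)
    for unit_seconds, suffix in ((86400, "d"), (3600, "h"), (60, "m")):
        if seconds >= unit_seconds and seconds % unit_seconds == 0:
            return f"{seconds // unit_seconds}{suffix}"
    return f"{seconds}s"

# normalize_window(24 * 3600 * 1000) -> "1d"
# normalize_window(60 * 1000)        -> "1m"
```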

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

  • Configuration

    • The meaning of S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of S3_STORAGE_CONCURRENCY=10 for example, meant that LogScale would allow 10 concurrent uploads, and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.

    • New dynamic configurations have been added:

    • Ingest rate monitoring for autosharding has been improved. For clusters with more than 10 nodes, only a subset of the nodes reports its ingest rate for any given datasource, and the total rate for each datasource is estimated from that. The dynamic configuration TargetMaxRateForDatasource still sets the threshold for sharding; however, once the rate is exceeded, it no longer needs to be twice the TargetMaxRateForDatasource value before shards are added.
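
The new shared-budget semantics of S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY can be illustrated with a single semaphore covering both transfer directions. This is a sketch of the described behavior, not LogScale code.

```python
import threading

class TransferThrottle:
    """One shared budget for uploads and downloads combined,
    mirroring the new bucket-storage concurrency semantics."""
    def __init__(self, concurrency):
        self._slots = threading.Semaphore(concurrency)

    def try_start(self):
        # Direction no longer matters: uploads and downloads draw
        # from the same pool of slots.
        return self._slots.acquire(blocking=False)

    def finish(self):
        self._slots.release()

# With concurrency=10, any mix of 10 uploads and downloads exhausts
# the budget; an 11th transfer must wait for a slot to free up.
throttle = TransferThrottle(10)
```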

  • Dashboards and Widgets

    • A series of improvements has been added to the dashboard layout experience:

      • New widgets will be added in the topmost available space

      • When you drag widgets up, all widgets in the same column will move together

      • Improved experience when swapping the order of widgets (horizontally or vertically)

  • Ingestion

    • Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed; we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration on the AWS side to get started.

      This feature is part of a gradual rollout and may not yet be available on your cloud instance, but it will be available to all customers in the coming weeks.

      For more information, see Ingest Data from AWS S3.

    • The limits on the size of parser test cases when exporting as templates or packages has been increased.

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.
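
The two file formats supported by Ingest Feeds (newline-delimited content, optionally gzip-compressed, and the CloudTrail-style JSON object carrying a Records array) could be consumed along the following lines. The helper is illustrative and not part of LogScale.

```python
import gzip
import json

def read_feed_events(raw_bytes):
    """Decode a feed object: gunzip if compressed, then yield events
    from either newline-delimited content or a CloudTrail-style
    JSON object with a Records array."""
    if raw_bytes[:2] == b"\x1f\x8b":  # gzip magic number
        raw_bytes = gzip.decompress(raw_bytes)
    text = raw_bytes.decode("utf-8").strip()
    if text.startswith("{"):
        try:
            doc = json.loads(text)
            if isinstance(doc, dict) and "Records" in doc:
                return list(doc["Records"])  # CloudTrail object format
        except json.JSONDecodeError:
            pass  # not a single JSON object; treat as line-delimited
    return text.splitlines()  # newline-delimited events

payload = gzip.compress(b'{"Records": [{"eventName": "PutObject"}]}')
# read_feed_events(payload) -> [{"eventName": "PutObject"}]
```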

  • Log Collector

    • Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language syntax. New Collectors enrolled into the fleet are automatically configured based upon the group filters they match, eliminating the need to manually assign a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.

      Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.

      For more information, see Manage Groups.

  • Queries

    • The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.

    • Live query cost metrics corrections:

      • livequeries-rate metric has changed from long to double.

      • livequeries-rate-canceled-due-to-digest-delay metric has changed from long to double.

      For more information, see Node-Level Metrics.

  • Functions

    • The new array:length() function has been introduced. It finds the length of an array by counting the number of array entries.

      For more information, see array:length().

Fixed in this release

  • UI Changes

    • When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.

  • Automation and Alerts

    • Scheduled searches that were updated while their action was failing would constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.

  • Storage

    • Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

      For more information, see Restoring a Repository or View.

  • Dashboards and Widgets

    • Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

  • Queries

    • Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.

  • Functions

    • selectLast() has been fixed for an issue that could cause this query function to miss events in certain cases.

  • Other

    • It was not possible to create a new repository with a time retention greater than 365 days. The UI now applies the retention limit configured for the customer organization.

      Input validation on fields when creating new repositories is now also improved.

Improvement

  • Storage

    • Digest reassignment may now assign partitions unevenly to hosts. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.

  • Ingestion

    • The cancellation mechanism for specific costly queries has been improved to cover cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.

Falcon LogScale 1.124.0 GA (2024-02-06)

Version: 1.124.0
Type: GA
Release Date: 2024-02-06
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • We have adjusted the code that calculates where to start reading from the ingest queue to be more conservative. It will no longer allow for skipping past segments that are not fully replicated when later segments on the same datasource are fully replicated. This fixes a very rare edge case that could cause data loss on clusters using ephemeral disks. Due to the changed behavior, any segment failing to properly replicate will now cause LogScale to stop deleting data from the affected Kafka partition. Cluster administrators are strongly encouraged to monitor for this case by keeping Kafka's disk usage under observation.

New features and improvements

  • UI Changes

    • Multi-Cluster Search — early adopter release for Self-hosted LogScale.

      • Keep the data close to the source, search from a single UI

      • Search across multiple LogScale clusters in a single view

      • Support key functionalities like alerts & dashboards

      The functionality is limited to LogScale self-hosted versions at this point.

      For more information, see LogScale Multi-Cluster Search.

    • The Field Aliasing feature is introduced. Implementing Field Aliasing in your workflow simplifies data correlation from various sources. With this feature, users can give alternative names — aliases — to fields created at parse time, across a view, or the entire organization. It makes data interpretation more intuitive and provides analysts with a smoother search experience.

      For more information, see Field Aliasing.

Fixed in this release

  • Storage

    • Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

      For more information, see Restoring a Repository or View.

Falcon LogScale 1.123.0 GA (2024-01-30)

Version: 1.123.0
Type: GA
Release Date: 2024-01-30
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • UI Changes

    • When managing users, it is now possible to also filter users by their assigned roles (for example, type admin in the Users search field).

  • Automation and Alerts

    • Slow-query logging has been added for when an alert is slow to start because its query has not finished the historical part.

  • Storage

    • We have changed how LogScale handles being temporarily bottlenecked by bucket storage. Uploads are now prioritized ahead of downloads, which reduces the impact on ingest work.

  • Configuration

    • The meaning of S3_STORAGE_CONCURRENCY and GCP_STORAGE_CONCURRENCY configuration variables has slightly changed. The settings are used for throttling downloads and uploads for bucket storage. Previously, a setting of S3_STORAGE_CONCURRENCY=10 for example, meant that LogScale would allow 10 concurrent uploads, and 10 concurrent downloads. Now, it means that LogScale will allow a total of 10 transfers at a time, disregarding the transfer direction.

  • Log Collector

    • Groups have been added to Fleet Management for the LogScale Collector. This feature makes it possible to define dynamic groups using a filter based upon a subset of the LogScale Query Language syntax. New Collectors enrolled into the fleet are automatically configured based upon the group filters they match, eliminating the need to manually assign a configuration to every new LogScale Collector. Groups also allow you to combine multiple reusable configuration snippets.

      Additionally, the management of instances has been simplified and merged into this new feature; the Assigned Instances page has therefore been removed in favor of the Group functions.

      For more information, see Manage Groups.

Fixed in this release

  • Automation and Alerts

    • Scheduled searches that were updated while their action was failing would constantly fail with a None.get error until they were disabled and enabled again, or the LogScale cluster was restarted. This issue is now fixed.

  • Queries

    • Queries in some cases would be killed as if they were blocked even though they did not match the criteria of the block. This issue is now fixed.

  • Other

    • It was not possible to create a new repository with a time retention greater than 365 days. The UI now applies the retention limit configured for the customer organization.

      Input validation on fields when creating new repositories is now also improved.

Improvement

  • Ingestion

    • The cancellation mechanism for specific costly queries has been improved to cover cases where those queries were restarted anyway: a query that exactly matches the blocked query string is now blocked for 5 minutes. This frees enough CPU for ingest to catch up while avoiding blocking queries for too long.

Falcon LogScale 1.122.0 GA (2024-01-23)

Version: 1.122.0
Type: GA
Release Date: 2024-01-23
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The humio Docker image is deprecated in favor of humio-core. humio is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the humio Docker image will be version 1.130.0.

    The new humio-single-node-demo image is an all-in-one container suitable for quick and easy demonstration setups, but which is entirely unsupported for production use.

    For more information, see Installing Using Containers.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • UI Changes

    • Time zone data has been updated to IANA 2023d.

    • Deletion of a file that is actively used by live queries will now stop those queries.

      For more information, see Exporting or Deleting a File.

  • Automation and Alerts

    • The following changes affect the UI for Standard Alerts:

      • A minimum time window of 1 minute is introduced, since anything smaller will not produce reliable results. Any existing standard alert with a time window smaller than 1 minute will not run; instead, an error notification will be shown.

      • It is no longer possible to specify the time window and the throttle period in milliseconds. Any existing standard alerts with a time window or throttle period specified in milliseconds will have it rounded to the nearest second.

      • When saving the alert, the query window is automatically changed to the largest unit in the Relative Time Syntax that can represent it. For example, 24h is changed to 1d and 60s is changed to 1m.

  • Dashboards and Widgets

    • A series of improvements has been added to the dashboard layout experience:

      • New widgets will be added in the topmost available space

      • When you drag widgets up, all widgets in the same column will move together

      • Improved experience when swapping the order of widgets (horizontally or vertically)

  • Queries

    • Live query cost metrics corrections:

      • livequeries-rate metric has changed from long to double.

      • livequeries-rate-canceled-due-to-digest-delay metric has changed from long to double.

      For more information, see Node-Level Metrics.

Falcon LogScale 1.121.0 GA (2024-01-16)

Version: 1.121.0
Type: GA
Release Date: 2024-01-16
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

Configuration

  • The DEFAULT_PARTITION_COUNT configuration parameter has been removed, as it was unused by the system due to earlier changes to partition handling.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

New features and improvements

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

  • Ingestion

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.

  • Functions

    • The new array:length() function has been introduced. It finds the length of an array by counting the number of array entries.

      For more information, see array:length().

Fixed in this release

  • UI Changes

    • When hovering over a query function in the query editor, the link to the function documentation now always points to the latest version of the page.

Falcon LogScale 1.120.0 GA (2024-01-09)

Version: 1.120.0
Type: GA
Release Date: 2024-01-09
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The default accuracy of the percentile() function has been adjusted. This means that any query that does not explicitly set the accuracy may see a change in reported percentile. Specifically, the percentile() function may now deviate by up to one 100th of the true percentile, meaning that if a given percentile has a true value of 1000, percentile() may report a percentile in the range of [990; 1010].

      On the flip side, percentile() now uses less memory by default, which should allow for additional series or groups when this function is used with either timeChart() or groupBy() and the default accuracy is used.
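      If a query depends on the previous precision, the accuracy can be pinned explicitly instead of relying on the new default. A sketch (the field name and accuracy value are illustrative):

      ```
      // Trade memory for precision by setting accuracy explicitly.
      percentile(field=responsetime, percentiles=[50, 90, 99], accuracy=0.001)
      ```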

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We aim to stop publishing the jar distribution of LogScale (e.g. server-1.117.jar) as of LogScale version 1.130.0.

      Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the tar artifact, and not the jar artifact.

      A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enum is now deprecated and will be removed in version 1.136 of LogScale.

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The Kafka client library has been upgraded to 3.6.1. Some minor changes have been made to serializers used by LogScale to reduce memory copying.

New features and improvements

  • Automation and Alerts

    • The ChangeTriggersAndActions permission is now replaced by two new permissions:

      • ChangeTriggers permission is needed to edit alerts or scheduled searches.

      • ChangeActions permission is needed to edit actions as well as to view them. Viewing the name and type of actions when editing triggers is still possible without this permission.

      Any user with the legacy ChangeTriggersAndActions permission will by default have both. It is possible to remove one of them for more granular access control.

  • Ingestion

    • Introducing Ingest Feeds, a new pull-based ingest source that ingests logs stored in AWS S3. The files within the AWS S3 bucket can be Gzip compressed, and we currently support newline-delimited files and the JSON object format in which CloudTrail logs are stored. Ingest Feeds require some configuration on the AWS side to get started.

      This feature is part of a gradual rollout process and may not be available on your cloud instance, but will be available to all customers in the following weeks.

      For more information, see Ingest Data from AWS S3.

Fixed in this release

  • Dashboards and Widgets

    • Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

  • Functions

    • selectLast() has been fixed for an issue that could cause this query function to miss events in certain cases.

Falcon LogScale 1.119.0 GA (2023-12-19)

Version: 1.119.0
Type: GA
Release Date: 2023-12-19
Availability: Cloud
End of Support: 2025-03-01
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the Asset interface type in GraphQL that Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes implemented. It was not used as a type for any field. All fields from the Asset interface type are still present in the implementing types.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The QUERY_COORDINATOR environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the query node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the INITIAL_DISABLED_NODE_TASK environment variable.

    For more information, see INITIAL_DISABLED_NODE_TASK.
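    For example, a node could be started with the query coordinator task disabled via the environment (a sketch; the task name follows the query node task mentioned above, and the task can later be assigned at runtime with assignTasks()):

    ```
    # Start the node with the query node task disabled (assumed task name).
    INITIAL_DISABLED_NODE_TASK=query
    ```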

New features and improvements

  • Ingestion

    • The limits on the size of parser test cases when exporting as templates or packages have been increased.

  • Queries

    • The worker-level prioritization of queries has been changed. The new prioritization will attempt to divide time evenly between all users, and divide the time given to each user evenly among that user's queries.

Falcon LogScale 1.118.4 LTS (2024-02-23)

Version: 1.118.4
Type: LTS
Release Date: 2024-02-23
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.4/server-1.118.4.tar.gz

These notes include entries from the following previous releases: 1.118.2, 1.118.3

Bug fixes and performance improvements.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The new parameter unit is added to formatTime() to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

      This is a breaking change: if you want to ensure fully backward-compatible behavior, set unit=milliseconds.

      For more information, see formatTime().
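      To keep the pre-change behavior, pass the unit explicitly. A sketch assuming a millisecond-epoch input field (the output field name is hypothetical):

      ```
      // Force the legacy interpretation of the input as milliseconds.
      formatTime("%Y-%m-%d %H:%M:%S", field=@timestamp, as=fmttime, unit=milliseconds)
      ```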

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST endpoints /api/v1/dataspaces/(id)/deleteevents and /api/v1/repositories/(id)/deleteevents have been removed. You can use the redactEvents GraphQL mutation and query instead.

    For more information, see redactEvents().

Deprecation

Items that have been deprecated and may be removed in a future release.

  • GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.

      For query warnings about missing data, either due to ingest delay or some existing data being currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA for more information.

      Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and the scheduled search was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered if all data had been available, for instance a scheduled search that triggers when a count of events drops below a threshold. On the other hand, it also stopped some scheduled searches from triggering even though they would still have triggered if all data had been available. In short, you would almost never have a scheduled search trigger when it should not, but you would sometimes have a scheduled search not trigger when it should have. This behavior has now been changed.

      With this change, we no longer recommend setting the configuration option SCHEDULED_SEARCH_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.

      On Prem only: be aware that the Akka to Pekko migration also affects configuration field names in application.conf. Clusters that are using a custom application.conf will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.

New features and improvements

  • UI Changes

    • The Files page has a new layout and changes:

      • It has been split into two pages: one containing a list of files and one with details of each file.

      • A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.

      • It displays information on the size limits and the step needed for syncing the imported files.

      For more information, see Files.

    • Parser test cases now automatically expand to the height of their content when the parser page loads.

    • When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.

    • We have improved the navigation on the page for Alerts, Scheduled Searches and Actions and the page is now called Automation.

      For more information, see Automation.

    • Lookup Files require unique column headers to work as expected. Previously this was only validated when attempting to use the file, so an invalid file could still be installed into LogScale; now lookup files with duplicate header names are blocked from being installed as well.

  • Automation and Alerts

    • LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.

    • When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.

    • When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.

  • GraphQL API

    • The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to paused in the dataspaces owned by the organization.

  • Storage

    • Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than the query being cancelled.

  • Configuration

    • Validation has been added for the LOCAL_STORAGE_PERCENTAGE configuration against targetDiskUsagePercentage, which may be set at runtime, to enforce that LOCAL_STORAGE_PERCENTAGE is at least 5 percentage points larger than targetDiskUsagePercentage. Nodes that violate this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

    • QueryMemoryLimit and LiveQueryMemoryLimit dynamic configurations have been replaced with QueryCoordinatorMemoryLimit, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. QueryCoordinatorMemoryLimit defaults to 400 MB; QueryMemoryLimit and LiveQueryMemoryLimit default to 100 MB regardless of their previous configuration.

      For more information, see General Limits & Parameters.

    • The new INITIAL_DISABLED_NODE_TASK environment variable is introduced.

      For more information, see INITIAL_DISABLED_NODE_TASK.

  • Dashboards and Widgets

    • Small multiples functionality is introduced for the Single Value, Gauge, and Pie Chart widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

      For more information, see Widgets.

    • We have added the new width option Fit to content for Event List columns. With this option selected, the width of the column depends on the content in the column.

    • Show thousands separator has been added as a configuration option of format Number for the Table widget.

  • Ingestion

    • When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.

    • A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest, when the node locally experiences digest lag. The following new dynamic configurations control this mechanism:

      • DelayIngestResponseDueToIngestLagMaxFactor limits how long the response may be delayed, measured as a factor on top of the actual time spent handling the request (default is 2).

      • DelayIngestResponseDueToIngestLagThreshold sets the number of milliseconds of digest lag where the feature starts to kick in (default is 20,000).

      • DelayIngestResponseDueToIngestLagScale sets the number of milliseconds of lag that adds 1 to the factor applied (default is 300,000).
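      As an illustration of how these defaults might interact (the formula below is an assumed reading of the parameter descriptions above, not documented behavior):

      ```
      digest lag            = 170,000 ms
      excess over threshold = 170,000 - 20,000 = 150,000 ms
      added factor          = 150,000 / 300,000 = 0.5   (capped by MaxFactor = 2)
      extra delay           ~ 0.5 x the time actually spent handling the request
      ```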

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.

  • Functions

    • The new query function duration() is introduced: it can be helpful in computations involving timestamps.

    • Live queries that use files in either match(), cidr(), or lookup() functions are no longer restarted when the file is updated. Instead the files are swapped while the queries are still running.

      For more information, see Lookup Files Operations.

    • The new query function parseUri() is introduced to support parsing of URIs without a scheme.

    • The new query function if() is introduced to compute one of two expressions depending on the outcome of a test.
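      A sketch combining these new functions (all field names are hypothetical, and we assume duration() accepts a relative-time string such as "1m" and yields milliseconds):

      ```
      // Parse a URL that may lack a scheme; defaultBase is an assumption here.
      parseUri(field=url, defaultBase="http://")
      // Label slow requests: duration() supplies the threshold, if() picks the label.
      | status := if(responsetime > duration("1m"), then="slow", else="ok")
      ```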

Fixed in this release

  • UI Changes

    • The dropdown menu on the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.

    • The page for creating repository or view tokens would fail to load if the user didn't have a Change IP filters Organization settings permission.

  • Automation and Alerts

    • If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would be wrongly marked as failing with an error like "The alert is broken. Save the alert again to fix it", along with an error log. This issue is now fixed.

    • If an error occurred where the error message was very large, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.

  • GraphQL API

    • Swapped parameters in GraphQL mutation updateOrganizationMutability have been fixed.

  • Dashboards and Widgets

    • The Gauge widget has been fixed as the Styling panel would not display configured thresholds.

    • The hovered series in Time Chart widget have been fixed as they would not be highlighted in the tooltip.

    • Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

    • The options for precision and thousands separators in Table widget have been fixed as they would not be saved correctly when editing other widgets on the Search page.

    • The legend title in widget charts has been fixed as it would offset the content when positioned to the right.

    • The Styling panel in the Table widget has been fixed as threshold coloring could be assigned unintentionally.

  • Ingestion

    • Parser timeout errors on ingested events that would occur at shutdown have now been fixed.

    • A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in the humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will show a peak in the graph, since it includes statistics from the period where calculations were skipped.

    • A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.

    • A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.

  • Queries

    • Occasional error logging from QueryScheduler.reduceAndSetSnapshot has been fixed.

  • Functions

    • The cidr() query function would fail to find some events when parameter negate=true was set. This incorrect behavior has now been fixed.

    • The cidr() function would handle a validation error incorrectly. This issue has been fixed.

    • The count() function with distinct parameter would give an incorrect count for utf8 strings. This issue has been fixed.

    • timeChart() and bucket() functions have been fixed as they would give slightly different results depending on whether their limit argument was left out or explicitly set to the default value.

Improvement

  • Storage

    • Reassignment of digest partitions that assigns partitions unevenly to hosts is now allowed. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.

Falcon LogScale 1.118.3 LTS (2024-02-06)

Version: 1.118.3
Type: LTS
Release Date: 2024-02-06
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.3/server-1.118.3.tar.gz

These notes include entries from the following previous releases: 1.118.2

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The new parameter unit is added to formatTime() to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

      This is a breaking change: if you want to ensure fully backward-compatible behavior, set unit=milliseconds.

      For more information, see formatTime().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST endpoints /api/v1/dataspaces/(id)/deleteevents and /api/v1/repositories/(id)/deleteevents have been removed. You can use the redactEvents GraphQL mutation and query instead.

    For more information, see redactEvents().

Deprecation

Items that have been deprecated and may be removed in a future release.

  • GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.

      For query warnings about missing data, either due to ingest delay or some existing data being currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable; see SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA for more information.

      Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and the scheduled search was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered if all data had been available, for instance a scheduled search that triggers when a count of events drops below a threshold. On the other hand, it also stopped some scheduled searches from triggering even though they would still have triggered if all data had been available. In short, you would almost never have a scheduled search trigger when it should not, but you would sometimes have a scheduled search not trigger when it should have. This behavior has now been changed.

      With this change, we no longer recommend setting the configuration option SCHEDULED_SEARCH_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.

      On Prem only: be aware that the Akka to Pekko migration also affects configuration field names in application.conf. Clusters that are using a custom application.conf will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.

New features and improvements

  • UI Changes

    • The Files page has a new layout and changes:

      • It has been split into two pages: one containing a list of files and one with details of each file.

      • A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.

      • It displays information on the size limits and the step needed for syncing the imported files.

      For more information, see Files.

    • Parser test cases now automatically expand to the height of their content when the parser page loads.

    • When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.

    • We have improved the navigation on the page for Alerts, Scheduled Searches and Actions and the page is now called Automation.

      For more information, see Automation.

    • Lookup Files require unique column headers to work as expected. Previously this was only validated when attempting to use the file, so an invalid file could still be installed into LogScale; now lookup files with duplicate header names are blocked from being installed as well.

  • Automation and Alerts

    • LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.

    • When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.

    • When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.

  • GraphQL API

    • The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to paused in the dataspaces owned by the organization.

  • Storage

    • Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than the query being cancelled.

  • Configuration

    • Validation has been added for the LOCAL_STORAGE_PERCENTAGE configuration against targetDiskUsagePercentage, which may be set at runtime, to enforce that LOCAL_STORAGE_PERCENTAGE is at least 5 percentage points larger than targetDiskUsagePercentage. Nodes that violate this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

    • QueryMemoryLimit and LiveQueryMemoryLimit dynamic configurations have been replaced with QueryCoordinatorMemoryLimit, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. QueryCoordinatorMemoryLimit defaults to 400 MB; QueryMemoryLimit and LiveQueryMemoryLimit default to 100 MB regardless of their previous configuration.

      For more information, see General Limits & Parameters.

    • The new INITIAL_DISABLED_NODE_TASK environment variable is introduced.

      For more information, see INITIAL_DISABLED_NODE_TASK.

  • Dashboards and Widgets

    • Small multiples functionality is introduced for the Single Value, Gauge, and Pie Chart widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

      For more information, see Widgets.

    • We have added the new width option Fit to content for Event List columns. With this option selected, the width of the column depends on the content in the column.

    • Show thousands separator has been added as a configuration option of format Number for the Table widget.

  • Ingestion

    • When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.

    • A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest, when the node locally experiences digest lag. The following new dynamic configurations control this mechanism:

      • DelayIngestResponseDueToIngestLagMaxFactor limits how long the response may be delayed, measured as a factor on top of the actual time spent handling the request (default is 2).

      • DelayIngestResponseDueToIngestLagThreshold sets the number of milliseconds of digest lag where the feature starts to kick in (default is 20,000).

      • DelayIngestResponseDueToIngestLagScale sets the number of milliseconds of lag that adds 1 to the factor applied (default is 300,000).

    • The amount of logging produced by DigestLeadershipLoggerJob has been reduced in clusters with many ingest queue partitions.

  • Functions

    • The new query function duration() is introduced: it can be helpful in computations involving timestamps.

    • Live queries that use files in either match(), cidr(), or lookup() functions are no longer restarted when the file is updated. Instead the files are swapped while the queries are still running.

      For more information, see Lookup Files Operations.

    • The new query function parseUri() is introduced to support parsing of URIs without a scheme.

    • The new query function if() is introduced to compute one of two expressions depending on the outcome of a test.

Fixed in this release

  • UI Changes

    • The dropdown menu on the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu could be hidden.

    • The page for creating repository or view tokens would fail to load if the user didn't have a Change IP filters Organization settings permission.

  • Automation and Alerts

    • If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would be wrongly marked as failing with an error like "The alert is broken. Save the alert again to fix it", along with an error log. This issue is now fixed.

    • If an error occurred where the error message was very large, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.

  • GraphQL API

    • Swapped parameters in GraphQL mutation updateOrganizationMutability have been fixed.

  • Dashboards and Widgets

    • The Gauge widget has been fixed as the Styling panel would not display configured thresholds.

    • The hovered series in Time Chart widget have been fixed as they would not be highlighted in the tooltip.

    • Users were prevented from exporting results of queries containing multi-value parameters. This issue is now fixed.

    • The options for precision and thousands separators in Table widget have been fixed as they would not be saved correctly when editing other widgets on the Search page.

    • The legend title in widget charts has been fixed as it would offset the content when positioned to the right.

    • The Styling panel in the Table widget has been fixed as threshold coloring could be assigned unintentionally.

  • Ingestion

    • Parser timeout errors on ingested events that would occur at shutdown have now been fixed.

    • A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in the humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will show a peak in the graph, since it includes statistics from the period where calculations were skipped.

    • A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.

    • A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.

  • Queries

    • Occasional error logging from QueryScheduler.reduceAndSetSnapshot has been fixed.

  • Functions

    • The cidr() query function would fail to find some events when the parameter negate=true was set. This issue has now been fixed.

    • The cidr() function would handle a validation error incorrectly. This issue has been fixed.

    • The count() function with the distinct parameter would give an incorrect count for UTF-8 strings. This issue has been fixed.

    • Fixed an issue where the timeChart() and bucket() functions would give slightly different results depending on whether their limit argument was left out or explicitly set to the default value.

Falcon LogScale 1.118.2 LTS (2024-01-17)

Version: 1.118.2
Type: LTS
Release Date: 2024-01-17
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.118.2/server-1.118.2.tar.gz

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The new parameter unit is added to formatTime() to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

      This is a breaking change: if you want to ensure fully backward-compatible behavior, set unit=milliseconds.

      For more information, see formatTime().
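As a sketch of pinning the legacy behavior in a query (the field, format string, and exact spelling of the unit values are illustrative; check the formatTime() reference):

```
// Keep legacy behavior by stating the input unit explicitly.
formatTime("%Y-%m-%d %H:%M:%S", field=@timestamp, as=fmttime, unit=milliseconds)
```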

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST endpoints /api/v1/dataspaces/(id)/deleteevents and /api/v1/repositories/(id)/deleteevents have been removed. You can use the redactEvents GraphQL mutation instead.

    For more information, see redactEvents().
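As a hedged sketch, the replacement call might look like the following GraphQL mutation. The input fields shown are illustrative assumptions, not the confirmed signature; see the redactEvents() reference for the actual arguments:

```graphql
# Hypothetical shape only -- the argument names are assumptions.
mutation {
  redactEvents(input: {
    repositoryName: "my-repo",
    start: "24h",
    end: "now",
    query: "user=alice"
  })
}
```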

Deprecation

Items that have been deprecated and may be removed in a future release.

  • GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.

      For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable, see SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA for more information.

      Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and it was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered had all data been available (for instance, a scheduled search that triggers when a count of events drops below a threshold). On the other hand, it also stopped some scheduled searches from triggering even though they would still have triggered had all data been available. In other words, a scheduled search would almost never trigger when it should not, but would sometimes fail to trigger when it should. This behavior has now been reverted.

      With this change, we no longer recommend setting the configuration option SCHEDULED_SEARCH_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.

      On-prem only: be aware that the Akka to Pekko migration also affects configuration field names in application.conf. Clusters that are using a custom application.conf will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.
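For example, a custom application.conf would need its configuration prefixes renamed. The specific setting below is illustrative, not necessarily one LogScale reads:

```
// Before: Akka-era naming (illustrative setting)
akka.http.server.request-timeout = 60s

// After: the Pekko counterpart
pekko.http.server.request-timeout = 60s
```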

New features and improvements

  • UI Changes

    • The Files page has a new layout and changes:

      • It has been split into two pages: one containing a list of files and one with details of each file.

      • A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.

      • It displays information on the size limits and the step needed for syncing the imported files.

      For more information, see Files.

    • Parser test cases now automatically expand to the height of their content when the parser page loads.

    • When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.

    • We have improved the navigation on the page for Alerts, Scheduled Searches, and Actions; the page is now called Automation.

      For more information, see Automation.

    • Lookup files require unique column headers to work as expected. Previously, this was only validated when attempting to use the file, so an invalid file could still be installed into LogScale. Lookup files with duplicate header names are now blocked from being installed as well.

  • Automation and Alerts

    • LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.

    • When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.

    • When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.

  • GraphQL API

    • The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to paused in the dataspaces owned by the organization.

  • Storage

    • Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.

  • Configuration

    • Added validation of the LOCAL_STORAGE_PERCENTAGE configuration against targetDiskUsagePercentage, which may be set at runtime, to enforce that LOCAL_STORAGE_PERCENTAGE is at least 5 percentage points larger than targetDiskUsagePercentage. Nodes violating this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

    • QueryMemoryLimit and LiveQueryMemoryLimit dynamic configurations have been replaced with QueryCoordinatorMemoryLimit, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. QueryCoordinatorMemoryLimit defaults to 400 MB; QueryMemoryLimit and LiveQueryMemoryLimit default to 100 MB regardless of their previous configuration.

      For more information, see General Limits & Parameters.

    • The new INITIAL_DISABLED_NODE_TASK environment variable is introduced.

      For more information, see INITIAL_DISABLED_NODE_TASK.
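The LOCAL_STORAGE_PERCENTAGE startup validation above can be sketched as follows. This mirrors the documented rule only; it is not LogScale's actual implementation:

```python
def validate_local_storage_percentage(local_storage_pct: int,
                                      target_disk_usage_pct: int) -> None:
    """Raise if LOCAL_STORAGE_PERCENTAGE is not at least 5 percentage
    points larger than targetDiskUsagePercentage (the documented rule)."""
    if local_storage_pct < target_disk_usage_pct + 5:
        raise ValueError(
            f"LOCAL_STORAGE_PERCENTAGE ({local_storage_pct}) must be at "
            f"least 5 percentage points larger than "
            f"targetDiskUsagePercentage ({target_disk_usage_pct})"
        )

validate_local_storage_percentage(90, 80)  # OK: 90 >= 80 + 5
```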

  • Dashboards and Widgets

    • Small multiples functionality is introduced for the Single Value, Gauge, and Pie Chart widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

      For more information, see Widgets.

    • We have added the new width option Fit to content for Event List columns. With this option selected, the width of the column depends on the content in the column.

    • Show thousands separator has been added as a configuration option of format Number for the Table widget.

  • Ingestion

    • When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.

    • A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

      • DelayIngestResponseDueToIngestLagMaxFactor limits how long the response may be delayed, measured as a factor on top of the actual time spent handling the request (default is 2).

      • DelayIngestResponseDueToIngestLagThreshold sets the number of milliseconds of digest lag where the feature starts to kick in (default is 20,000).

      • DelayIngestResponseDueToIngestLagScale sets the number of milliseconds of lag that adds 1 to the factor applied (default is 300,000).
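One plausible reading of how these three settings combine is sketched below. The exact formula is an assumption, not taken from the release notes: no delay below the threshold, each "scale" milliseconds of lag beyond it adding 1 to the response-time factor, capped at the maximum factor.

```python
def delayed_response_ms(actual_ms: float, digest_lag_ms: float,
                        threshold_ms: float = 20_000,
                        scale_ms: float = 300_000,
                        max_factor: float = 2.0) -> float:
    """Total response time including the artificial delay (assumed model)."""
    if digest_lag_ms <= threshold_ms:
        return actual_ms  # below the threshold, no delay is applied
    # Each scale_ms of lag beyond the threshold adds 1 to the factor,
    # capped at max_factor.
    factor = min(1.0 + (digest_lag_ms - threshold_ms) / scale_ms, max_factor)
    return actual_ms * factor
```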

  • Functions

    • The new query function duration() is introduced: it can be helpful in computations involving timestamps.

    • Live queries that use files in either match(), cidr(), or lookup() functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running.

      For more information, see Lookup Files Operations.

    • The new query function parseUri() is introduced to support parsing of URIs without a scheme.

    • The new query function if() is introduced to compute one of two expressions depending on the outcome of a test.
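As a brief sketch of the new if() function (the field names are illustrative; check the if() reference for the exact parameter spelling):

```
// Classify each event by its HTTP status code into a new field.
statusClass := if(status_code >= 500, then="server-error", else="ok")
```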

Fixed in this release

  • UI Changes

    • The dropdown menu in the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu would be hidden.

    • The page for creating repository or view tokens would fail to load if the user didn't have the Change IP filters Organization settings permission.

  • Automation and Alerts

    • If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would be wrongly marked as failing with an error like The alert is broken. Save the alert again to fix it, and an error would be logged. This issue is now fixed.

    • If an error occurred with a very large error message, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.

  • GraphQL API

    • Fixed swapped parameters in the GraphQL mutation updateOrganizationMutability.

  • Dashboards and Widgets

    • Fixed an issue in the Gauge widget where the Styling panel would not display configured thresholds.

    • Fixed an issue in the Time Chart widget where hovered series would not be highlighted in the tooltip.

    • Fixed an issue in the Table widget where the options for precision and thousands separators would not be saved correctly when editing other widgets on the Search page.

    • Fixed an issue where the legend title in widget charts would offset the content when positioned to the right.

    • Fixed an issue in the Styling panel of the Table widget where threshold coloring could be assigned unintentionally.

  • Ingestion

    • Parser timeout errors on ingested events that would occur at shutdown have now been fixed.

    • A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.

    • A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.

    • A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.

  • Queries

    • Occasional error logging from QueryScheduler.reduceAndSetSnapshot has been fixed.

  • Functions

    • The cidr() query function would fail to find some events when the parameter negate=true was set. This issue has now been fixed.

    • The cidr() function would handle a validation error incorrectly. This issue has been fixed.

    • The count() function with the distinct parameter would give an incorrect count for UTF-8 strings. This issue has been fixed.

    • Fixed an issue where the timeChart() and bucket() functions would give slightly different results depending on whether their limit argument was left out or explicitly set to the default value.

Falcon LogScale 1.118.1 Internal (2023-12-20)

Version: 1.118.1
Type: Internal
Release Date: 2023-12-20
Availability: Internal Only
End of Support: 2024-12-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Internal-only release.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Falcon LogScale 1.118.0 GA (2023-12-12)

Version: 1.118.0
Type: GA
Release Date: 2023-12-12
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST endpoints /api/v1/dataspaces/(id)/deleteevents and /api/v1/repositories/(id)/deleteevents have been removed. You can use the redactEvents GraphQL mutation instead.

    For more information, see redactEvents().

New features and improvements

  • UI Changes

    • We have improved the navigation on the page for Alerts, Scheduled Searches, and Actions; the page is now called Automation.

      For more information, see Automation.

  • Dashboards and Widgets

    • Small multiples functionality is introduced for the Single Value, Gauge, and Pie Chart widgets. This feature allows you to partition your query result on a single dimension into multiple visuals of the same widget type for easy comparison.

      For more information, see Widgets.

    • We have added the new width option Fit to content for Event List columns. With this option selected, the width of the column depends on the content in the column.

    • Event List and Table widgets now support custom date time formats.

Falcon LogScale 1.117.0 GA (2023-12-05)

Version: 1.117.0
Type: GA
Release Date: 2023-12-05
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • GraphQL mutation updateOrganizationMutability is deprecated in favor of the new setBlockIngest mutation.

New features and improvements

  • GraphQL API

    • The new setBlockIngest GraphQL mutation is introduced to block ingest for the organization and set ingest to paused in the dataspaces owned by the organization.

  • Functions

    • Live queries that use files in either match(), cidr(), or lookup() functions are no longer restarted when the file is updated. Instead, the files are swapped while the queries are still running.

      For more information, see Lookup Files Operations.

Fixed in this release

  • GraphQL API

    • Fixed swapped parameters in the GraphQL mutation updateOrganizationMutability.

  • Dashboards and Widgets

    • Fixed an issue in the Styling panel of the Table widget where threshold coloring could be assigned unintentionally.

  • Functions

    • Fixed an issue where the timeChart() and bucket() functions would give slightly different results depending on whether their limit argument was left out or explicitly set to the default value.

Falcon LogScale 1.116.0 GA (2023-11-28)

Version: 1.116.0
Type: GA
Release Date: 2023-11-28
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • We've migrated from the Akka dependency to Apache Pekko. This means that all internal logs referencing Akka will be substituted with the Pekko counterpart. Users will need to update any triggers or dashboards that rely on such logs.

      On-prem only: be aware that the Akka to Pekko migration also affects configuration field names in application.conf. Clusters that are using a custom application.conf will need to update their configuration to use the Pekko configuration names instead of the Akka configuration names.

New features and improvements

  • Storage

    • Handling of IOExceptions in part of the segment reading code has been improved. Such exceptions now cause the segment to be excluded from the query (and potentially refetched from bucket storage) and a warning to be shown to the user, rather than cancelling the query.

  • Configuration

    • QueryMemoryLimit and LiveQueryMemoryLimit dynamic configurations have been replaced with QueryCoordinatorMemoryLimit, which controls the maximum memory usage of the coordinating node. This memory limit will, in turn, determine the limits of the static query state size and the live query state size. QueryCoordinatorMemoryLimit defaults to 400 MB; QueryMemoryLimit and LiveQueryMemoryLimit default to 100 MB regardless of their previous configuration.

      For more information, see General Limits & Parameters.

Fixed in this release

  • Dashboards and Widgets

    • Fixed an issue in the Time Chart widget where hovered series would not be highlighted in the tooltip.

    • Fixed an issue in the Table widget where the options for precision and thousands separators would not be saved correctly when editing other widgets on the Search page.

    • Fixed an issue where the legend title in widget charts would offset the content when positioned to the right.

  • Functions

    • The cidr() function would handle a validation error incorrectly. This issue has been fixed.

    • The count() function with the distinct parameter would give an incorrect count for UTF-8 strings. This issue has been fixed.

Falcon LogScale 1.115.0 GA (2023-11-21)

Version: 1.115.0
Type: GA
Release Date: 2023-11-21
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

New features and improvements

  • UI Changes

    • The Files page has a new layout and changes:

      • It has been split into two pages: one containing a list of files and one with details of each file.

      • A view limit of 100 MB has been added and you'll get an error in the UI if you try to view files larger than this size.

      • It displays information on the size limits and the step needed for syncing the imported files.

      For more information, see Files.

    • Parser test cases now automatically expand to the height of their content when the parser page loads.

    • When selecting a parser test case, there is now a button to scroll to that test case again if you scroll away from it.

  • Automation and Alerts

    • LogScale now creates notifications for alerts and scheduled searches with warnings in addition to notifications for errors. The notifications for warnings will have a severity of warning.

  • Ingestion

    • A new mechanism is introduced that delays the response to an HTTP ingest request from nodes that also do digest when the digest node locally experiences digest lag. The following new dynamic configurations control this mechanism:

      • DelayIngestResponseDueToIngestLagMaxFactor limits how long the response may be delayed, measured as a factor on top of the actual time spent handling the request (default is 2).

      • DelayIngestResponseDueToIngestLagThreshold sets the number of milliseconds of digest lag where the feature starts to kick in (default is 20,000).

      • DelayIngestResponseDueToIngestLagScale sets the number of milliseconds of lag that adds 1 to the factor applied (default is 300,000).

Fixed in this release

  • UI Changes

    • The dropdown menu in the TablePage now opens upwards and is rendered in front of other elements, fixing a bug where the menu would be hidden.

  • Dashboards and Widgets

    • Fixed an issue in the Gauge widget where the Styling panel would not display configured thresholds.

  • Ingestion

    • A parser that failed to construct would sometimes result in events receiving a null error. This issue has been fixed.

    • A digest coordination issue has been fixed: it could cause mini-segments to stay behind on old digest leaders when leadership changes.

  • Queries

    • Occasional error logging from QueryScheduler.reduceAndSetSnapshot has been fixed.

Falcon LogScale 1.114.0 GA (2023-11-14)

Version: 1.114.0
Type: GA
Release Date: 2023-11-14
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

New features and improvements

  • Automation and Alerts

    • When Filter Alerts encounter a query warning that could potentially affect the result of the alert, the warning is now saved with the alert, so that it is visible in the alerts overview, same as for Standard Alerts.

Fixed in this release

  • Automation and Alerts

    • If an error occurred with a very large error message, the error would not be stored on the failing alert or scheduled search. This issue has been fixed.

  • Ingestion

    • A gap in the statistics of ingest per day experienced by some organizations on the Usage Page and in humio-usage repository, causing the graph to drop to zero, has now been fixed. As a consequence of this fix, the first measurement performed with version 1.114 will result in the graph showing a peak, since it would include statistics from the period where calculations were skipped.

Falcon LogScale 1.113.0 GA (2023-11-09)

Version: 1.113.0
Type: GA
Release Date: 2023-11-09
Availability: Cloud
End of Support: 2025-01-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The new parameter unit is added to formatTime() to specify whether the input field is in seconds or milliseconds, or if it should be auto-detected by the system.

      This is a breaking change: if you want to ensure fully backward-compatible behavior, set unit=milliseconds.

      For more information, see formatTime().

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Scheduled Searches handle query warnings, similar to what was done for Standard Alerts (see Falcon LogScale 1.112.0 GA (2023-10-24)). Previously, LogScale only triggered Scheduled Searches if there were no query warnings. Now, scheduled searches will trigger despite most query warnings, and the scheduled search status will show a warning instead of an error.

      For query warnings about missing data, either due to ingest delay or some existing data that is currently unavailable, the scheduled search will retry for up to 10 minutes by default. This waiting time is configurable, see SCHEDULED_SEARCH_MAX_WAIT_FOR_MISSING_DATA for more information.

      Up until now, all query warnings were treated as errors: the scheduled search did not trigger even though it produced results, and it was shown with an error in LogScale. Most query warnings meant that not all data was queried. The previous behavior prevented the scheduled search from triggering in cases where it would not have triggered had all data been available (for instance, a scheduled search that triggers when a count of events drops below a threshold). On the other hand, it also stopped some scheduled searches from triggering even though they would still have triggered had all data been available. In other words, a scheduled search would almost never trigger when it should not, but would sometimes fail to trigger when it should. This behavior has now been reverted.

      With this change, we no longer recommend setting the configuration option SCHEDULED_SEARCH_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should make the scheduled search fail.

New features and improvements

  • UI Changes

    • Lookup files require unique column headers to work as expected. Previously, this was only validated when attempting to use the file, so an invalid file could still be installed into LogScale. Lookup files with duplicate header names are now blocked from being installed as well.

  • Automation and Alerts

    • When clearing errors on alerts or scheduled searches, all notifications about the problem are now automatically deleted as soon as the error is cleared. Previously, notifications were only updated every 15 minutes. Note that if the error returns, a new notification will be created.

  • GraphQL API

    • The redactEvents() mutation will no longer be allowed for users who have a limiting query prefix.

  • Configuration

    • Added validation of the LOCAL_STORAGE_PERCENTAGE configuration against targetDiskUsagePercentage, which may be set at runtime, to enforce that LOCAL_STORAGE_PERCENTAGE is at least 5 percentage points larger than targetDiskUsagePercentage. Nodes violating this constraint will not be able to start. In addition, the setTargetDiskUsagePercentage mutation will not allow violating the constraint.

  • Dashboards and Widgets

    • Show thousands separator has been added as a configuration option of format Number for the Table widget.

  • Ingestion

    • When navigating between parser test cases, the table showing the outputs for the test case will now scroll to the top when you select a new test case.

  • Functions

    • The new query function duration() is introduced: it can be helpful in computations involving timestamps.

    • The new query function parseUri() is introduced to support parsing of URIs without a scheme.

    • The new query function if() is introduced to compute one of two expressions depending on the outcome of a test.

Fixed in this release

  • UI Changes

    • The page for creating repository or view tokens would fail to load if the user didn't have the Change IP filters Organization settings permission.

  • Automation and Alerts

    • If a filter alert, standard alert, or scheduled search was reassigned to run on another node in the cluster due to changes in the available cluster nodes, it would be wrongly marked as failing with an error like The alert is broken. Save the alert again to fix it, and an error would be logged. This issue is now fixed.

  • Ingestion

    • Parser timeout errors on ingested events that would occur at shutdown have now been fixed.

  • Functions

    • The cidr() query function would fail to find some events when the parameter negate=true was set. This issue has now been fixed.

Falcon LogScale 1.112.4 LTS (2024-02-23)

Version: 1.112.4
Type: LTS
Release Date: 2024-02-23
Availability: Cloud
End of Support: 2024-11-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.4/server-1.112.4.tar.gz

These notes include entries from the following previous releases: 1.112.1, 1.112.2, 1.112.3

Bug fixes and performance improvements.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:

    • Removed the ZooKeeper status page from the User Interface

    • Removed the ZooKeeper related GraphQL mutations

    • Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.

    Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.

  • Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.

GraphQL API

  • The deprecated client mutation ID concept is now being removed from the GraphQL API:

    • Removed the clientMutationId argument from many mutations.

    • Removed the clientMutationId field from the returned type of many mutations.

    • Renamed the ClientMutationID datatype returned from some mutations to the BooleanResultType datatype. The clientMutationId field on the returned type has been removed and replaced by a boolean field named result.

  • Most deprecated queries, mutations and fields have now been removed from the GraphQL API.

Storage

  • The unused humio-backup symlink inside Docker containers has been removed.

Configuration

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following REST endpoints for deleting events have been deprecated:

    • /api/v1/dataspaces/(Id)/deleteevents

    • /api/v1/repositories/(id)/deleteevents

    The new GraphQL mutation redactEvents should be used instead.
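
A call to the replacement mutation can be sketched as follows. Only the mutation name redactEvents comes from the notes above; the input type and argument names (RedactEventsInput, repositoryName, query, start, end) are illustrative assumptions, not the documented schema.

```python
# Building a request body for the redactEvents GraphQL mutation that
# replaces the deprecated deleteevents REST endpoints. Argument names are
# assumptions for illustration; consult the GraphQL schema for the real ones.
import json

def redact_events_payload(repo, event_filter, start_ms, end_ms):
    mutation = "mutation($input: RedactEventsInput!) { redactEvents(input: $input) }"
    variables = {"input": {
        "repositoryName": repo,   # assumed argument name
        "query": event_filter,    # assumed: filter selecting events to redact
        "start": start_ms,        # assumed: epoch milliseconds
        "end": end_ms,
    }}
    return json.dumps({"query": mutation, "variables": variables})

body = redact_events_payload("myrepo", "user=alice", 0, 1700000000000)
```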

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results would not trigger and was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented some alerts from triggering that would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also suppressed alerts that would still have triggered with all data available. In short, you would almost never get an alert you should not have gotten, but you would sometimes miss an alert you should have gotten. We have reverted this behaviour. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

      For more information, see Diagnosing Alerts.
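
The warning-handling decision above can be sketched as follows. FATAL_WARNINGS is a hypothetical name: the release note only says a few warning kinds should still fail the alert, without naming them.

```python
# Sketch of the alert warning-handling decision described above.
FATAL_WARNINGS = {"internal_error"}  # hypothetical warning kind

def alert_should_trigger(has_results, warnings, alert_despite_warnings=False):
    if not has_results:
        return False
    if alert_despite_warnings:
        # ALERT_DESPITE_WARNINGS=true treats every warning as non-fatal,
        # which is why the note no longer recommends it.
        return True
    # Default behaviour: trigger despite most warnings, fail on a few kinds.
    return not any(w in FATAL_WARNINGS for w in warnings)
```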

Upgrades

Changes that may occur or be required during an upgrade.

  • Security

  • Configuration

    • Docker containers have been upgraded to Java 21.

New features and improvements

  • Installation and Deployment

    • LogScale can now be configured to write fatal JVM error logs in the JVM logging directory, which is specified using the JVM_LOG_DIR variable. The default directory is /logs/humio.

  • UI Changes

    • Most tables in the LogScale UI now support column resizing, except the Table widget used during search.

    • The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.

    • The list of permissions now has a specific custom order in the UI, as follows.

      • Organization:

        1. Organization settings

        2. Repository and view management

        3. Permissions and user management

        4. Fleet management

        5. Query monitoring

        6. Other

      • Cluster management:

        1. Cluster management

        2. Organization management

        3. Subdomains

        4. Others

    • A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.

      For more information, see Aggregate Permissions.

    • It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.

      For more information, see Filter Match Highlighting.

  • Automation and Alerts

    • The new Import from button has been added to the Scheduled Searches form, allowing you to import a Scheduled Search from a template or package.

    • When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to how @timestamp is.

    • The UI flow for Scheduled Searches has been updated: when you click on New Scheduled Search it will directly go to the New Scheduled Search form.

    • The Alert forms will not show any errors when the alert is disabled.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The contentHash field on the File output type has been reintroduced.

  • Storage

    • JVM_TMP_DIR has been added to the launcher script. This option is used for configuring java.io.tmpdir and jna.tmpdir for the JVM. The directory will default to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.

      For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.

    • Bucket storage cleaning of tmp files now only runs on a few nodes in the cluster rather than on all nodes.
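
The JVM_TMP_DIR fallback described above can be sketched as follows. The real logic lives in the shell launcher script; this Python rendering is illustrative only.

```python
# Sketch of the JVM_TMP_DIR default-resolution rule: when unset, it falls
# back to "jvm-tmp" inside the directory given by the DIRECTORY setting.
import os

def resolve_jvm_tmp_dir(env):
    explicit = env.get("JVM_TMP_DIR")
    if explicit:
        return explicit
    # DIRECTORY is LogScale's data-directory setting; no default value
    # is assumed here beyond requiring it to be present.
    return os.path.join(env["DIRECTORY"], "jvm-tmp")

print(resolve_jvm_tmp_dir({"DIRECTORY": "/data/humio-data"}))
```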

  • Configuration

    • The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added.

      For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.

    • Query queueing based on the available memory in the query coordinator is now enabled by default, by treating the dynamic configuration QueryCoordinatorMaxHeapFraction as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.

    • The default value of LOCAL_STORAGE_PERCENTAGE has been set to 85, and the minimum value to 0. Previously the default was to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

    • The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of tmp file listing in the bucket.

  • Dashboards and Widgets

    • You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.

      The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.

      For more information, see Export Dashboards as PDF.

    • The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

      For more information, see Gauge Widget.

    • A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).

    • Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.

    • A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.

    • New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:

      • Conditional formatting of table cells

      • Text wrapping and column resizing

      • Row numbering

      • Number formatting

      • Link formatting

      • Column hiding

      For more information, see Table Widget.
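
The parameter-input invalidation option described above (a comma-separated list of invalid input patterns) can be sketched as follows. The exact parsing and matching semantics here are assumptions for illustration, for instance whether commas may appear inside a pattern.

```python
# Sketch of parameter-input invalidation: the option is a comma-separated
# list of regexes, and input matching any pattern is rejected.
import re

def is_invalid_input(value, invalid_patterns):
    patterns = [p.strip() for p in invalid_patterns.split(",") if p.strip()]
    return any(re.search(p, value) for p in patterns)

blocklist = r"rm\s+-rf, drop\s+table"
```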

  • Ingestion

    • When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.

      For more information, see Using the Parser Code Editor.

  • Log Collector

    • The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.

  • Functions

  • Packages

    • Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.

    • It is now possible to see the type of an action in Packages (Marketplace, Installed, and Create a package).

Fixed in this release

  • UI Changes

    • Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.

    • The following issue has been fixed on the Search page: if a regular expression contained named groups with special characters (for example, underscore _), a recent change introduced with Filter Match Highlighting could cause a server error and hang the UI.

    • The following items about Saving Queries have been fixed:

      • The Search... field for saved queries did not return what would be expected.

      • Upon reopening the Queries dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.

      • Added focus on the Search... field when reopening the Queries dropdown.

  • Automation and Alerts

    • Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.

    • An issue that could cause filter alerts to fail right after a cluster restart has now been fixed.

    • When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query. This issue has been fixed.

  • GraphQL API

    • When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.

  • Storage

    • A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.

      1. Ensure a copy of the local file is present in the bucket storage backing up the cluster

      2. Delete the local copy

      As a result, any merge attempt involving that file will succeed after the next restart of LogScale.

    • Fixed an issue that could cause repositories undeleted using the mechanism described at Restoring a Repository or View to be only partially restored. Some deleted datasources within the repositories could erroneously be skipped during restoration.

      For more information, see Restoring a Repository or View.

  • Dashboards and Widgets

    • Field values containing % would not be resolved correctly in interactions. This issue has been fixed.

  • Ingestion

    • The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.

  • Functions

    • Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

    • The match() function, when using a JSON file containing an object with a missing field, could lead to an internal error. This issue has been fixed.

    • The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.

    • The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.

  • Other

    • A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.

    • Fixed a race that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

    • A boot-time version checking issue has been fixed: when joining a fresh cluster, the first node to join that cluster could crash on boot.

  • Packages

    • Updating of a Package failed when using anything other than a personal user token. This issue has been fixed.

    • Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.

    • Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

    • Fixed a broken link from the saved query asset in Packages to the Search page.

    • The alert types in Package Marketplace were shown twice. This is now fixed, so each type is shown once as expected.

Improvement

  • Storage

    • Reassignment of digest that assigns partitions unevenly to hosts is now allowed. This supports clusters where hosts are not evenly sized, so an even partition assignment is not expected.

Falcon LogScale 1.112.3 LTS (2024-01-30)

Version: 1.112.3
Type: LTS
Release Date: 2024-01-30
Availability: Cloud
End of Support: 2024-11-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.3/server-1.112.3.tar.gz

These notes include entries from the following previous releases: 1.112.1, 1.112.2

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:

    • Removed the ZooKeeper status page from the User Interface

    • Removed the ZooKeeper related GraphQL mutations

    • Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.

    Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.

  • Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.

GraphQL API

  • The deprecated client mutation ID concept is now being removed from the GraphQL API:

    • Removed the clientMutationId argument from many mutations.

    • Removed the clientMutationId field from the returned type of many mutations.

    • Renamed the ClientMutationID datatype returned from some mutations to BooleanResultType. The clientMutationId field on the returned type has been removed and replaced with a boolean field named result.

  • Most deprecated queries, mutations and fields have now been removed from the GraphQL API.

Storage

  • The unused humio-backup symlink inside Docker containers has been removed.

Configuration

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following REST endpoints for deleting events have been deprecated:

    • /api/v1/dataspaces/(Id)/deleteevents

    • /api/v1/repositories/(id)/deleteevents

    The new GraphQL mutation redactEvents should be used instead.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results would not trigger and was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented some alerts from triggering that would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also suppressed alerts that would still have triggered with all data available. In short, you would almost never get an alert you should not have gotten, but you would sometimes miss an alert you should have gotten. We have reverted this behaviour. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

      For more information, see Diagnosing Alerts.

Upgrades

Changes that may occur or be required during an upgrade.

  • Security

  • Configuration

    • Docker containers have been upgraded to Java 21.

New features and improvements

  • Installation and Deployment

    • LogScale can now be configured to write fatal JVM error logs in the JVM logging directory, which is specified using the JVM_LOG_DIR variable. The default directory is /logs/humio.

  • UI Changes

    • Most tables in the LogScale UI now support column resizing, except the Table widget used during search.

    • The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.

    • The list of permissions now has a specific custom order in the UI, as follows.

      • Organization:

        1. Organization settings

        2. Repository and view management

        3. Permissions and user management

        4. Fleet management

        5. Query monitoring

        6. Other

      • Cluster management:

        1. Cluster management

        2. Organization management

        3. Subdomains

        4. Others

    • A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.

      For more information, see Aggregate Permissions.

    • It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.

      For more information, see Filter Match Highlighting.

  • Automation and Alerts

    • The new Import from button has been added to the Scheduled Searches form, allowing you to import a Scheduled Search from a template or package.

    • When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to how @timestamp is.

    • The UI flow for Scheduled Searches has been updated: when you click on New Scheduled Search it will directly go to the New Scheduled Search form.

    • The Alert forms will not show any errors when the alert is disabled.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The contentHash field on the File output type has been reintroduced.

  • Storage

    • JVM_TMP_DIR has been added to the launcher script. This option is used for configuring java.io.tmpdir and jna.tmpdir for the JVM. The directory will default to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.

      For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.

    • Bucket storage cleaning of tmp files now only runs on a few nodes in the cluster rather than on all nodes.

  • Configuration

    • The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added.

      For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.

    • Query queueing based on the available memory in the query coordinator is now enabled by default, by treating the dynamic configuration QueryCoordinatorMaxHeapFraction as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.

    • The default value of LOCAL_STORAGE_PERCENTAGE has been set to 85, and the minimum value to 0. Previously the default was to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

    • The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of tmp file listing in the bucket.

  • Dashboards and Widgets

    • You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.

      The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.

      For more information, see Export Dashboards as PDF.

    • The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

      For more information, see Gauge Widget.

    • A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).

    • Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.

    • A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.

    • New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:

      • Conditional formatting of table cells

      • Text wrapping and column resizing

      • Row numbering

      • Number formatting

      • Link formatting

      • Column hiding

      For more information, see Table Widget.

  • Ingestion

    • When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.

      For more information, see Using the Parser Code Editor.

  • Log Collector

    • The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.

  • Functions

  • Packages

    • Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.

    • It is now possible to see the type of an action in Packages (Marketplace, Installed, and Create a package).

Fixed in this release

  • UI Changes

    • Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.

    • The following issue has been fixed on the Search page: if a regular expression contained named groups with special characters (for example, underscore _), a recent change introduced with Filter Match Highlighting could cause a server error and hang the UI.

    • The following items about Saving Queries have been fixed:

      • The Search... field for saved queries did not return what would be expected.

      • Upon reopening the Queries dropdown after having filled out the Search... field, the text would still be present in the Search... field but would not filter the queries.

      • Added focus on the Search... field when reopening the Queries dropdown.

  • Automation and Alerts

    • Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.

    • An issue that could cause filter alerts to fail right after a cluster restart has now been fixed.

    • When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query. This issue has been fixed.

  • GraphQL API

    • When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.

  • Storage

    • A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.

      1. Ensure a copy of the local file is present in the bucket storage backing up the cluster

      2. Delete the local copy

      As a result, any merge attempt involving that file will succeed after the next restart of LogScale.

  • Dashboards and Widgets

    • Field values containing % would not be resolved correctly in interactions. This issue has been fixed.

  • Ingestion

    • The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.

  • Functions

    • Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

    • The match() function, when using a JSON file containing an object with a missing field, could lead to an internal error. This issue has been fixed.

    • The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.

    • The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.

  • Other

    • A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.

    • Fixed a race that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

    • A boot-time version checking issue has been fixed: when joining a fresh cluster, the first node to join that cluster could crash on boot.

  • Packages

    • Updating of a Package failed when using anything other than a personal user token. This issue has been fixed.

    • Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.

    • Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

    • Fixed a broken link from the saved query asset in Packages to the Search page.

    • The alert types in Package Marketplace were shown twice. This is now fixed, so each type is shown once as expected.

Falcon LogScale 1.112.2 LTS (2024-01-22)

Version: 1.112.2
Type: LTS
Release Date: 2024-01-22
Availability: Cloud
End of Support: 2024-11-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.2/server-1.112.2.tar.gz

These notes include entries from the following previous releases: 1.112.1

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:

    • Removed the ZooKeeper status page from the User Interface

    • Removed the ZooKeeper related GraphQL mutations

    • Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.

    Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.

  • Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.

GraphQL API

  • The deprecated client mutation ID concept is now being removed from the GraphQL API:

    • Removed the clientMutationId argument from many mutations.

    • Removed the clientMutationId field from the returned type of many mutations.

    • Renamed the ClientMutationID datatype returned from some mutations to BooleanResultType. The clientMutationId field on the returned type has been removed and replaced with a boolean field named result.

  • Most deprecated queries, mutations and fields have now been removed from the GraphQL API.

Storage

  • The unused humio-backup symlink inside Docker containers has been removed.

Configuration

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following REST endpoints for deleting events have been deprecated:

    • /api/v1/dataspaces/(Id)/deleteevents

    • /api/v1/repositories/(id)/deleteevents

    The new GraphQL mutation redactEvents should be used instead.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Standard Alerts handle query warnings. Previously, LogScale only triggered alerts if there were no query warnings: all query warnings were treated as errors, so an alert that produced results would not trigger and was shown with an error in LogScale. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented some alerts from triggering that would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also suppressed alerts that would still have triggered with all data available. In short, you would almost never get an alert you should not have gotten, but you would sometimes miss an alert you should have gotten. We have reverted this behaviour. With this change, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

      For more information, see Diagnosing Alerts.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • Docker containers have been upgraded to Java 21.

New features and improvements

  • Installation and Deployment

    • LogScale can now be configured to write fatal JVM error logs in the JVM logging directory, which is specified using the JVM_LOG_DIR variable. The default directory is /logs/humio.

  • UI Changes

    • Most tables inside the LogScale UI now support resizing columns, except the Table widget used during search.

    • The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.

    • The list of permissions now has a specific custom order in the UI, as follows.

      • Organization:

        1. Organization settings

        2. Repository and view management

        3. Permissions and user management

        4. Fleet management

        5. Query monitoring

        6. Other

      • Cluster management:

        1. Cluster management

        2. Organization management

        3. Subdomains

        4. Others

    • A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.

      For more information, see Aggregate Permissions.

    • It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.

      For more information, see Filter Match Highlighting.

  • Automation and Alerts

    • The new button Import from has been added to the Scheduled Searches form, allowing you to import a Scheduled Search from a template or package.

    • When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.
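
The qualified naming scheme can be illustrated with a small parsing sketch; the format string comes from this note, while the helper itself is purely illustrative (LogScale resolves these names server-side):

```python
import re

# "packagescope/packagename:actionname" -- the qualified action reference.
QUALIFIED_NAME = re.compile(r"^(?P<scope>[^/:]+)/(?P<package>[^/:]+):(?P<action>.+)$")

def parse_action_reference(name: str):
    """Return (scope, package, action) for a qualified name, or None for an
    unqualified name, which the GraphQL API no longer resolves."""
    m = QUALIFIED_NAME.match(name)
    if m is None:
        return None
    return m.group("scope"), m.group("package"), m.group("action")
```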

    • When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.

    • The UI flow for Scheduled Searches has been updated: when you click on New Scheduled Search it will directly go to the New Scheduled Search form.

    • The Alert forms no longer show errors when the alert is disabled.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.
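
As a rough illustration of these limits, the sketch below counts name tokens in a GraphQL document and compares the total against the limit for the caller; the server's actual validation is more precise than this approximation:

```python
import re

# Defaults from this release; adjustable via the dynamic configurations above.
LIMITS = {"authenticated": 1000, "unauthenticated": 150}

def naive_selection_count(gql: str) -> int:
    """Very rough count of selected fields: strip argument lists, then count
    name tokens that are not GraphQL keywords."""
    gql = re.sub(r"\([^)]*\)", "", gql)                 # drop argument lists
    names = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", gql)  # remaining name tokens
    keywords = {"query", "mutation", "subscription", "fragment", "on"}
    return sum(1 for n in names if n not in keywords)

def within_limit(gql: str, authenticated: bool) -> bool:
    key = "authenticated" if authenticated else "unauthenticated"
    return naive_selection_count(gql) <= LIMITS[key]
```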

    • The contentHash field on the File output type has been reintroduced.

  • Storage

    • JVM_TMP_DIR has been added to the launcher script. This option is used for configuring java.io.tmpdir and jna.tmpdir for the JVM. The directory will default to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.

      For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.
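
The resolution order described above can be sketched as follows; this mirrors the note's description of the default, not the actual launcher script:

```python
import os

def resolve_jvm_tmp_dir(env: dict) -> str:
    """Use JVM_TMP_DIR when set; otherwise default to 'jvm-tmp' inside the
    directory given by the DIRECTORY setting (assumed logic, for illustration)."""
    explicit = env.get("JVM_TMP_DIR")
    if explicit:
        return explicit
    return os.path.join(env.get("DIRECTORY", "."), "jvm-tmp")
```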

    • Bucket storage cleaning of tmp files now only runs on a few nodes in the cluster rather than on all nodes.

  • Configuration

    • The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added.

      For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.

    • Query queueing based on the available memory in the query coordinator is now enabled by default: the dynamic configuration QueryCoordinatorMaxHeapFraction is treated as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.
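
A minimal sketch of the new default, assuming the fraction is compared against coordinator heap usage (the comparison itself is an assumption for illustration):

```python
def effective_heap_fraction(configured):
    """An unset QueryCoordinatorMaxHeapFraction now behaves as 0.5; a value
    of 1000 effectively disables queueing (the threshold is unreachable)."""
    return 0.5 if configured is None else configured

def should_queue(used_heap_fraction: float, configured=None) -> bool:
    # Hypothetical check: queue new queries once coordinator heap usage
    # exceeds the configured fraction.
    return used_heap_fraction > effective_heap_fraction(configured)
```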

    • Set the default value of LOCAL_STORAGE_PERCENTAGE to 85, and the minimum value to 0. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

    • The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of tmp file listing in the bucket.

  • Dashboards and Widgets

    • You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.

      The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.

      For more information, see Export Dashboards as PDF.

    • The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

      For more information, see Gauge Widget.

    • A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).
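
A minimal sketch of how such a pattern list could be evaluated, assuming full-match semantics (the exact matching behaviour is not specified here):

```python
import re

def is_invalid_input(value: str, invalid_patterns: str) -> bool:
    """Return True if `value` matches any regex in the comma-separated
    invalid-input pattern list, per the option described above."""
    for pattern in invalid_patterns.split(","):
        pattern = pattern.strip()
        if pattern and re.fullmatch(pattern, value):
            return True
    return False
```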

    • Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.

    • A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.

    • New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:

      • Conditional formatting of table cells

      • Text wrapping and column resizing

      • Row numbering

      • Number formatting

      • Link formatting

      • Column hiding

      For more information, see Table Widget.

  • Ingestion

    • When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.

      For more information, see Using the Parser Code Editor.

  • Log Collector

    • The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.

  • Packages

    • Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.

    • It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).

Fixed in this release

  • UI Changes

    • Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.

    • The following issue has been fixed on the Search page: if regular expressions contained named groups with special characters (for example, underscore _), a recent change introduced with Filter Match Highlighting could cause a server error and hang the UI.

    • The following items about Saving Queries have been fixed:

      • The Search... field for saved queries did not return what would be expected.

      • Upon reopening the Queries dropdown after having filled out the Search... field, the text would still be present in the Search... field but not filter on the queries.

      • Added focus on the Search... field when reopening the Queries dropdown.

  • Automation and Alerts

    • Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.

    • An issue that could cause filter alerts to fail right after a cluster restart has been fixed.

    • When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.

  • GraphQL API

    • When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.

  • Storage

    • A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.

      1. Ensure a copy of the local file is present in the bucket storage backing up the cluster

      2. Delete the local copy

      As a result, any merge attempt involving that file will succeed after the next restart of LogScale.
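
The two steps above can be sketched defensively; the exists_in_bucket callback below is hypothetical and stands in for a check against your bucket storage provider:

```python
from pathlib import Path

def drop_broken_segment(local_file: Path, exists_in_bucket) -> bool:
    """Delete a checksum-broken local segment only after confirming a copy
    exists in bucket storage, per the manual workaround described above."""
    if not exists_in_bucket(local_file.name):
        return False              # no backup in the bucket: keep the local file
    local_file.unlink()           # safe to remove; merges succeed after restart
    return True
```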

  • Dashboards and Widgets

    • Field values containing % would not be resolved correctly in interactions. This issue has been fixed.

  • Ingestion

    • The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.

  • Functions

    • Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

    • The match() function could cause an internal error when used with a JSON file containing an object with a missing field. This issue has been fixed.

    • The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.

    • The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.

  • Other

    • A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.

    • Fixed a race condition that could leave a query in a state where it caused an excessive amount of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

    • A boot-time version checking issue has been fixed: LogScale could crash on boot when joining a fresh cluster if the first node to join that cluster had crashed.

  • Packages

    • Updating a Package failed when using anything other than a personal user token. This issue has been fixed.

    • Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.

    • Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

    • Fixed a broken link from the saved query asset in Packages to the Search page.

    • The alert types in Package Marketplace were showing twice — this is now fixed so it properly shows one type as expected.

Falcon LogScale 1.112.1 LTS (2023-11-15)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.112.1 | LTS | 2023-11-15 | Cloud | 2024-11-30 | No | 1.70.0 | No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.112.1/server-1.112.1.tar.gz

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:

    • Removed the ZooKeeper status page from the User Interface

    • Removed the ZooKeeper related GraphQL mutations

    • Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.

    Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.

  • Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.

GraphQL API

  • The deprecated client mutation ID concept is now being removed from the GraphQL API:

    • Removed the clientMutationId argument for a lot of mutations.

    • Removed the clientMutationId field from the returned type for a lot of mutations.

    • Renamed the ClientMutationID datatype, that was returned from some mutations to BooleanResultType datatype. Removed the clientMutationId field on the returned type and replaced it by a boolean field named result.

  • Most deprecated queries, mutations and fields have now been removed from the GraphQL API.

Storage

  • The unused humio-backup symlink inside Docker containers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following REST endpoints for deleting events have been deprecated:

    • /api/v1/dataspaces/(Id)/deleteevents

    • /api/v1/repositories/(id)/deleteevents

    The new GraphQL mutation redactEvents should be used instead.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Standard Alerts handle query warnings. Previously, LogScale triggered an alert only if there were no query warnings: all query warnings were treated as errors, so an alert was shown with an error in LogScale and did not trigger even when it produced results. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it stopped some alerts from triggering even though they would still have triggered with all data available. In short, you would almost never get an alert that you should not have gotten, but you would sometimes miss an alert that you should have gotten. This change reverts that trade-off. We therefore no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

      For more information, see Diagnosing Alerts.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • Docker containers have been upgraded to Java 21.

New features and improvements

  • Installation and Deployment

    • LogScale can now be configured to write fatal JVM error logs in the JVM logging directory, which is specified using the JVM_LOG_DIR variable. The default directory is /logs/humio.

  • UI Changes

    • Most tables inside the LogScale UI now support resizing columns, except the Table widget used during search.

    • The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.

    • The list of permissions now has a specific custom order in the UI, as follows.

      • Organization:

        1. Organization settings

        2. Repository and view management

        3. Permissions and user management

        4. Fleet management

        5. Query monitoring

        6. Other

      • Cluster management:

        1. Cluster management

        2. Organization management

        3. Subdomains

        4. Others

    • A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.

      For more information, see Aggregate Permissions.

    • It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the events text.

      For more information, see Filter Match Highlighting.

  • Automation and Alerts

    • The new button Import from has been added to the Scheduled Searches form, allowing you to import a Scheduled Search from a template or package.

    • When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similarly to @timestamp.

    • The UI flow for Scheduled Searches has been updated: when you click on New Scheduled Search it will directly go to the New Scheduled Search form.

    • The Alert forms no longer show errors when the alert is disabled.

  • GraphQL API

    • The contentHash field on the File output type has been reintroduced.

  • Storage

    • JVM_TMP_DIR has been added to the launcher script. This option is used for configuring java.io.tmpdir and jna.tmpdir for the JVM. The directory will default to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.

      For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.

    • Bucket storage cleaning of tmp files now only runs on a few nodes in the cluster rather than on all nodes.

  • Configuration

    • The new configuration option LOCAL_STORAGE_PREFILL_PERCENTAGE has been added.

      For more information, see LOCAL_STORAGE_PREFILL_PERCENTAGE.

    • Query queueing based on the available memory in the query coordinator is now enabled by default: the dynamic configuration QueryCoordinatorMaxHeapFraction is treated as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.

    • Set the default value of LOCAL_STORAGE_PERCENTAGE to 85, and the minimum value to 0. The default was previously to leave this unset, which is not safe in clusters where bucket storage contains more data than will fit on local drives.

    • The new environment variable DISABLE_BUCKET_CLEANING_TMP_FILES has been introduced. It reduces the amount of tmp file listing in the bucket.

  • Dashboards and Widgets

    • You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.

      The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.

      For more information, see Export Dashboards as PDF.

    • The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

      For more information, see Gauge Widget.

    • A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).

    • Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.

    • A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.

    • New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:

      • Conditional formatting of table cells

      • Text wrapping and column resizing

      • Row numbering

      • Number formatting

      • Link formatting

      • Column hiding

      For more information, see Table Widget.

  • Ingestion

    • When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.

      For more information, see Using the Parser Code Editor.

  • Log Collector

    • The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.

  • Packages

    • Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.

    • It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).

Fixed in this release

  • UI Changes

    • Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.

    • The following issue has been fixed on the Search page: if regular expressions contained named groups with special characters (for example, underscore _), a recent change introduced with Filter Match Highlighting could cause a server error and hang the UI.

    • The following items about Saving Queries have been fixed:

      • The Search... field for saved queries did not return what would be expected.

      • Upon reopening the Queries dropdown after having filled out the Search... field, the text would still be present in the Search... field but not filter on the queries.

      • Added focus on the Search... field when reopening the Queries dropdown.

  • Automation and Alerts

    • Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.

    • An issue that could cause filter alerts to fail right after a cluster restart has been fixed.

    • When used with Filter Alerts, the {events_html} message template would not keep the order of the fields from the Alert query.

  • GraphQL API

    • When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.

  • Storage

    • A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.

      1. Ensure a copy of the local file is present in the bucket storage backing up the cluster

      2. Delete the local copy

      As a result, any merge attempt involving that file will succeed after the next restart of LogScale.

  • Dashboards and Widgets

    • Field values containing % would not be resolved correctly in interactions. This issue has been fixed.

  • Ingestion

    • The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.

  • Functions

    • Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

    • The match() function could cause an internal error when used with a JSON file containing an object with a missing field. This issue has been fixed.

    • The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.

    • The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.

  • Other

    • A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.

    • Fixed a race condition that could leave a query in a state where it caused an excessive amount of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

    • A boot-time version checking issue has been fixed: LogScale could crash on boot when joining a fresh cluster if the first node to join that cluster had crashed.

  • Packages

    • Updating a Package failed when using anything other than a personal user token. This issue has been fixed.

    • Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.

    • Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

    • Fixed a broken link from the saved query asset in Packages to the Search page.

    • The alert types in Package Marketplace were showing twice — this is now fixed so it properly shows one type as expected.

Falcon LogScale 1.112.0 GA (2023-10-24)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.112.0 | GA | 2023-10-24 | Cloud | 2024-11-30 | No | 1.70.0 | No

Available for download two days after release.

Bug fixes and updates.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • We have changed how Standard Alerts handle query warnings. Previously, LogScale triggered an alert only if there were no query warnings: all query warnings were treated as errors, so an alert was shown with an error in LogScale and did not trigger even when it produced results. Now, alerts trigger despite most query warnings, and the alert status shows a warning instead of an error. Most query warnings mean that not all data was queried. The previous behaviour prevented an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it stopped some alerts from triggering even though they would still have triggered with all data available. In short, you would almost never get an alert that you should not have gotten, but you would sometimes miss an alert that you should have gotten. This change reverts that trade-off. We therefore no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

      For more information, see Diagnosing Alerts.

Upgrades

Changes that may occur or be required during an upgrade.

  • Storage

    • This release introduces a change to the internal storage format used for sharing global data. Once upgraded to v1.112 or higher, it is not possible to downgrade to a version lower than 1.112.

New features and improvements

  • Installation and Deployment

    • LogScale can now be configured to write fatal JVM error logs in the JVM logging directory, which is specified using the JVM_LOG_DIR variable. The default directory is /logs/humio.

  • UI Changes

    • The behavior of the ComboBox has changed: the drop-down is not filtered until the text in the filter field has been edited, allowing you to easily copy, alter or clear the text.

    • The list of permissions now has a specific custom order in the UI, as follows.

      • Organization:

        1. Organization settings

        2. Repository and view management

        3. Permissions and user management

        4. Fleet management

        5. Query monitoring

        6. Other

      • Cluster management:

        1. Cluster management

        2. Organization management

        3. Subdomains

        4. Others

    • A combined view of permissions is now available to show all roles listed together when there is more than one role under each repository, organization, or system.

      For more information, see Aggregate Permissions.

  • Automation and Alerts

    • The Alert forms no longer show errors when the alert is disabled.

  • Dashboards and Widgets

    • You can enable the export of Dashboards to a PDF file, with many options available to control the output layout and formatting.

      The feature is available to all users who already have access to dashboard data. This is the first of two feature releases, aiming to provide full schedulable PDF reporting capabilities to LogScale.

      For more information, see Export Dashboards as PDF.

    • The new Gauge widget is introduced: it allows you to represent values on a fixed scale, offering a visual and intuitive way to monitor key performance metrics.

      For more information, see Gauge Widget.

Fixed in this release

  • UI Changes

    • The Time Selector and the date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • Queries could "flicker" for a short period causing "negative alerts" to trigger for no reason (negative alerts are alerts that check for the absence of events). This issue has been fixed.

  • Automation and Alerts

    • Notifications on problems with Filter Alerts were not automatically removed when the problem was solved. This issue is now fixed.

  • GraphQL API

    • When trying to delete an Alert, Scheduled Search or Dashboard using a mutation for one of the other types, it would end up in a state where it was not deleted, but could not run either. This issue is now fixed.

  • Other

    • A minor logging issue has been fixed: ClusterHostAliveStats would log that hosts were "changed from being considered dead to alive" on hosts that had just rebooted, when such hosts actually consider all other nodes alive for a little while, to allow the booting node some time to hear heartbeats from others.

  • Packages

    • The alert types in Package Marketplace were showing twice — this is now fixed so it properly shows one type as expected.

Falcon LogScale 1.111.1 GA (2023-10-28)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.111.1 | GA | 2023-10-28 | Cloud | 2024-11-30 | No | 1.70.0 | No

Available for download two days after release.

Bug fixes and updates.

Fixed in this release

  • UI Changes

    • The Time Selector and the date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

Falcon LogScale 1.111.0 GA (2023-10-10)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.111.0 | GA | 2023-10-10 | Cloud | 2024-11-30 | No | 1.70.0 | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale will only trigger alerts if there are no query warnings. Starting with upcoming 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Up until now, all query warnings have been treated as errors. This means that the alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents the alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it makes some alerts not trigger, even though they would still have triggered with all data available. That means that currently you will almost never get an alert that you should not have gotten, but you will sometimes not get an alert that you should have gotten. We plan to revert this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

Removed

Items that have been removed as of this release.

Storage

  • The unused humio-backup symlink inside Docker containers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following REST endpoints for deleting events have been deprecated:

    • /api/v1/dataspaces/(Id)/deleteevents

    • /api/v1/repositories/(id)/deleteevents

    The new GraphQL mutation redactEvents should be used instead.
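As a sketch of the migration, a call to the deprecated REST endpoint can be replaced with the redactEvents GraphQL mutation. The argument names and value formats below are assumptions for illustration only; check them against your cluster's GraphQL schema before use:

```graphql
# Hypothetical sketch: input field names and types may differ in your
# LogScale version. Consult the GraphQL schema for redactEvents.
mutation {
  redactEvents(input: {
    repositoryName: "my-repo",
    start: 1696291200000,   # epoch millis (assumed format)
    end: 1696377600000,     # epoch millis (assumed format)
    query: "password=*"
  })
}
```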

New features and improvements

  • Storage

    • JVM_TMP_DIR has been added to the launcher script. This option is used for configuring java.io.tmpdir and jna.tmpdir for the JVM. The directory will default to jvm-tmp inside the directory specified by the DIRECTORY setting. This default should alleviate issues starting LogScale on some systems due to the /tmp directory being marked as noexec.

      For more information, see Troubleshooting: Error Starting LogScale due to Exec permissions on /tmp.

    • Bucket storage cleaning of tmp files now only runs on a few nodes in the cluster rather than on all nodes.

  • Dashboards and Widgets

    • New formatting options have been introduced for the Table widget, to get actionable insights from your data faster:

      • Conditional formatting of table cells

      • Text wrapping and column resizing

      • Row numbering

      • Number formatting

      • Link formatting

      • Columns hiding

      For more information, see Table Widget.

  • Ingestion

    • When writing parsers, the fields produced by a test case are now available for autocompletion in the editor.

      For more information, see Using the Parser Code Editor.

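The JVM_TMP_DIR launcher option described above can be set in the environment before starting LogScale. A minimal sketch, assuming a shell environment file; the path is illustrative:

```shell
# Illustrative: point the JVM temp directory at a location that is not
# mounted noexec. If unset, the launcher defaults to jvm-tmp inside the
# directory given by the DIRECTORY setting.
export JVM_TMP_DIR=/var/lib/humio/jvm-tmp
```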
Fixed in this release

  • UI Changes

    • The following issue has been fixed on the Search page: if a regular expression contained named groups with special characters (underscore _, for example), a recent change introduced with Filter Match Highlighting would cause a server error and hang the UI.

    • The following items about Saving Queries have been fixed:

      • The Search... field for saved queries did not return the expected results.

      • Upon reopening the Queries dropdown after having filled out the Search... field, the text would still be present in the Search... field but the queries would not be filtered.

      • Added focus on the Search... field when reopening the Queries dropdown.

  • Dashboards and Widgets

    • Field values containing % would not be resolved correctly in interactions. This issue has been fixed.

  • Functions

    • Results for empty buckets didn't include the steps after the first aggregator of the subquery. This issue has now been fixed.

  • Packages

    • Updating of a Package failed when using anything other than a personal user token. This issue has been fixed.

    • Aligned the requirements to allow all tokens (with the correct permissions) to install and update Packages.

Falcon LogScale 1.110.1 GA (2023-10-28)

Version: 1.110.1 | Type: GA | Release Date: 2023-10-28 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Fixed in this release

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

Falcon LogScale 1.110.0 GA (2023-10-03)

Version: 1.110.0 | Type: GA | Release Date: 2023-10-03 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

New features and improvements

  • GraphQL API

    • The contentHash field on the File output type has been reintroduced.

  • Dashboards and Widgets

    • A parameter configuration option has been added to support invalidation of parameter inputs. The format for this is a comma separated list of invalid input patterns (regexes).

    • A parameter configuration option has been added to allow setting a custom message when a parameter input is invalid.

  • Packages

    • Filter alerts and Standard alerts are now shown in the same tab Alerts under Assets when installing or viewing installed Packages.

    • It is now possible to see the type of action in Packages (Marketplace, Installed and Create a package).
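The invalid-input parameter option described under Dashboards and Widgets is a comma-separated list of regex patterns. The following Python sketch is purely illustrative of that deny-list idea (it is not LogScale's implementation; the function and its behaviour are assumptions):

```python
import re

def is_valid_input(value: str, invalid_patterns: str) -> bool:
    """Return False if the value matches any comma-separated invalid pattern.

    Hypothetical sketch of a regex deny-list; empty patterns are skipped.
    """
    for pattern in invalid_patterns.split(","):
        pattern = pattern.strip()
        if pattern and re.search(pattern, value):
            return False
    return True

# Example: reject inputs containing whitespace or a literal asterisk.
print(is_valid_input("web-server-01", r"\s,\*"))  # True
print(is_valid_input("web *", r"\s,\*"))          # False
```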

Fixed in this release

  • Storage

    • A workaround solution has been identified for those cases where segment files on local disk no longer pass their internal checksum test and are detected as "broken" by the background merge process.

      1. Ensure a copy of the local file is present in the bucket storage that backs up the cluster

      2. Delete the local copy

      As a result, any merge attempt involving that file will succeed after the next restart of LogScale.

  • Ingestion

    • The buttons used for editing and deleting an ingest listener were overlapping in Safari on the Ingest Listeners page under a repository. This issue has been fixed.

  • Functions

    • The regex() function has been fixed for cases where \Q...\E could cause problems for named capturing groups.

    • The array:filter() function has been fixed for an issue that caused incorrect output element values in certain circumstances.

  • Other

    • A boot-time version checking issue has been fixed: when joining a fresh cluster, the first node to join that cluster could crash on boot.

  • Packages

    • Fixed a broken link from saved query asset in Packages to Search page.

Falcon LogScale 1.109.1 GA (2023-10-28)

Version: 1.109.1 | Type: GA | Release Date: 2023-10-28 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Fixed in this release

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

Falcon LogScale 1.109.0 GA (2023-09-26)

Version: 1.109.0 | Type: GA | Release Date: 2023-09-26 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

Upgrades

Changes that may occur or be required during an upgrade.

  • Configuration

    • Docker containers have been upgraded to Java 21.

New features and improvements

  • Automation and Alerts

    • The new button Import from has been added to the Scheduled Searches form allowing importing a Scheduled Search from template or package.

    • When creating or updating Scheduled Searches using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • When generating CSV files for attaching to emails or uploading to LogScale in actions, or when using the message template {events_html}, the field @ingesttimestamp is now formatted similar to how @timestamp is.

    • The UI flow for Scheduled Searches has been updated: clicking New Scheduled Search now goes directly to the New Scheduled Search form.

  • Log Collector

    • The Fleet Management tab on the Fleet Overview page has been renamed to Data Ingest.

Fixed in this release

  • Automation and Alerts

    • Filter alerts that could fail right after a cluster restart have now been fixed.

  • Other

    • A cluster with very little disk space left could result in excessive logging from com.humio.distribution.RendezvousSegmentDistribution.

  • Packages

    • Updating a package with a lookup file and a parser/scheduled search/filter alert/alert containing match would fail if the new column parameter did not exist in the old lookup file. This issue has now been fixed.

Falcon LogScale 1.108.0 GA (2023-09-19)

Version: 1.108.0 | Type: GA | Release Date: 2023-09-19 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • All ZooKeeper-related functionality for LogScale was deprecated in December 2022, and is now removed:

    • Removed the ZooKeeper status page from the User Interface

    • Removed the ZooKeeper related GraphQL mutations

    • Removed the migration support for node IDs created by ZooKeeper, as we no longer support upgrading from versions prior to 1.70.

    Depending on your chosen Kafka deployment, ZooKeeper may still be required to support Kafka.

GraphQL API

  • The deprecated client mutation ID concept is now being removed from the GraphQL API:

    • Removed the clientMutationId argument from many mutations.

    • Removed the clientMutationId field from the returned type of many mutations.

    • Renamed the ClientMutationID datatype returned from some mutations to BooleanResultType; removed the clientMutationId field on the returned type and replaced it with a boolean field named result.

  • Most deprecated queries, mutations and fields have now been removed from the GraphQL API.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5GB per CPU core as a default.

      • Print a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.

  • Configuration

    • Query queueing based on the available memory in the query coordinator is enabled by default by treating the dynamic configuration QueryCoordinatorMaxHeapFraction as 0.5 if it has not been set. To disable queueing, set QueryCoordinatorMaxHeapFraction to 1000.

  • Dashboards and Widgets

    • Introduced a new style option Show 'Others' to the Time Chart Widget: it allows you to show/hide other series when there are more series than the maximum allowed in the chart.
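The query-queueing behaviour described above can be pictured as a simple heap-budget check. This Python sketch is a hypothetical illustration of the decision, not LogScale's internals; the function name and inputs are assumptions:

```python
def should_queue_query(estimated_bytes_in_flight: int,
                       max_heap_bytes: int,
                       max_heap_fraction: float = 0.5) -> bool:
    """Queue new work once in-flight queries exceed the allowed heap fraction.

    Hypothetical sketch: a very large fraction (e.g. 1000) effectively
    disables queueing, mirroring the QueryCoordinatorMaxHeapFraction=1000
    workaround described above.
    """
    return estimated_bytes_in_flight > max_heap_fraction * max_heap_bytes

# With an 8 GiB heap and the default fraction of 0.5 (budget: 4 GiB):
heap = 8 * 1024**3
print(should_queue_query(5 * 1024**3, heap))          # True: above budget
print(should_queue_query(3 * 1024**3, heap))          # False: within budget
print(should_queue_query(100 * 1024**3, heap, 1000))  # False: disabled
```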

Fixed in this release

  • Functions

    • Fixed a bug where join() queries could result in a memory leak from their sub queries not being properly cleaned up.

Falcon LogScale 1.107.0 GA (2023-09-12)

Version: 1.107.0 | Type: GA | Release Date: 2023-09-12 | Availability: Cloud | End of Support: 2024-11-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Running on Java 11, 12, 13, 14, 15 and 16 is no longer supported. The minimum supported Java version is 17 starting from this LogScale release.

New features and improvements

  • UI Changes

    • Most tables inside the LogScale UI now support column resizing, except the Table widget used during search.

    • It is now possible to highlight results based on the filters applied in queries. This helps significantly when trying to understand why a query matches the results or when looking for a specific part of the event text.

      For more information, see Filter Match Highlighting.

Fixed in this release

  • Functions

    • The match() function could produce an internal error when using a JSON file containing an object with a missing field. This issue has been fixed.

Falcon LogScale 1.106.6 LTS (2024-01-22)

Version: 1.106.6 | Type: LTS | Release Date: 2024-01-22 | Availability: Cloud | End of Support: 2024-09-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.6/server-1.106.6.tar.gz

These notes include entries from the following previous releases: 1.106.2, 1.106.4, 1.106.5

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5GB per CPU core as a default.

      • Print a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.

  • UI Changes

    • The Show in context dialog now closes when the Search button in the dialog is clicked.

    • The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.

  • Automation and Alerts

    • It is now possible to import and export Filter Alerts in Packages from the UI.

    • When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • The UI flow for Alerts has been updated: when you click New alert you are taken directly to the New alert form.

    • Importing an alert from a template or package is now done from the new Import from button located at the top of the New alert form.

    • When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.

    • Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • When installing a package, all actions referenced by Alerts and Scheduled searches in the package must be contained in the package. Previously, missing actions were just ignored.

    • It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.

  • GraphQL API

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:

      • createAlert

      • updateAlert

      • createScheduledSearch

      • updateScheduledSearch

  • Dashboards and Widgets

    • The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a yaml file.

    • The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.

  • Ingestion

    • The ability to remove fields when parsing data has been enabled for all users.

      For more information, see Removing Fields.

    • Audit logs for Ingest Tokens now include the ingest token name.

  • Log Collector

    • You can now toggle columns on the instance table, thereby specifying which information should be shown.

    • In Fleet Management, it is now possible to discard the draft of a configuration and rollback to the published version.

      For more information, see Edit a Remote Configuration.

  • Functions

    • The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.

    • The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.

    • The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.

    • Support for decimal values as exponent and divisor is now added in math:pow() and math:mod() functions respectively.

    • The memory consumption of the formatTime() function has been decreased.
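The MD5 digest produced by the new crypto:md5() function can be reproduced outside LogScale for cross-checking. This Python sketch hashes a single value with hashlib; how LogScale combines multiple fields before hashing is not specified here, so only a single field is shown:

```python
import hashlib

def md5_hex(value: str) -> str:
    """MD5 digest of a UTF-8 encoded value, as lowercase hex."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

print(md5_hex("hello"))  # 5d41402abc4b2a76b9719d911017c592
```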

Fixed in this release

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.

  • Automation and Alerts

    • If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.

    • Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.

    • Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.

    • With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.

    • Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.

  • Dashboards and Widgets

    • Queries on a dashboard would be invalid if the dashboard filter contained a single-line comment. This issue has been fixed.

    • Widget description tips on dashboards would not show, or would show the same text for multiple widgets. This issue has been fixed.

    • If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page where auto page size is turned off. On the dashboard, where auto page size is turned on, the existing behaviour remains.

  • Log Collector

    • Fleet Overview in Fleet Management would hang and not display any data. This behavior has been fixed.

  • Functions

    • Fixed a bug where join() queries could result in a memory leak from their sub queries not being properly cleaned up.

    • The hash() query function would sometimes compute incorrect hashes when the field value was encoded in UTF-8. This is now fixed.

    • Fixed an issue that could result in cluster performance degradation using join() under certain circumstances.

    • Field names in the query used to export results to CSV were not quoted correctly. This has now been fixed.

    • The format() function has been fixed: the US date format modifier produced the EU date format instead.

  • Other

    • Fixed a race that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • The following repository issues have been fixed:

      • After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.

      • Some repositories could only be created partially and would be left partially initialized in LogScale's internal architecture.
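The US-versus-EU confusion fixed in format() above is easy to illustrate with the two date orderings. This Python sketch uses strftime purely for illustration; it is not LogScale's formatter:

```python
from datetime import datetime

ts = datetime(2023, 10, 3)
us = ts.strftime("%m/%d/%Y")  # US ordering: month first
eu = ts.strftime("%d/%m/%Y")  # EU ordering: day first
print(us)  # 10/03/2023
print(eu)  # 03/10/2023
```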

Falcon LogScale 1.106.5 LTS (2023-11-15)

Version: 1.106.5 | Type: LTS | Release Date: 2023-11-15 | Availability: Cloud | End of Support: 2024-09-30 | Security Updates: No | Upgrades From: 1.70.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.5/server-1.106.5.tar.gz

These notes include entries from the following previous releases: 1.106.2, 1.106.4

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers alerts if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behaviour prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it also stops some alerts from triggering even though they would still have triggered with all data available. In short, you will currently almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this.

      When this change happens, we will no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and a few query warnings should still make the alert fail.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5GB per CPU core as a default.

      • Print a warning if the sum of the heap size and the direct memory setting exceeds the total available memory.

  • UI Changes

    • The Show in context dialog now closes when the Search button in the dialog is clicked.

    • The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.

  • Automation and Alerts

    • It is now possible to import and export Filter Alerts in Packages from the UI.

    • When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of "packagescope/packagename:actionname". Actions in packages will no longer be found if using an unqualified name.

    • The UI flow for Alerts has been updated: when you click New alert you are taken directly to the New alert form.

    • Importing an alert from a template or package is now done from the new Import from button located at the top of the New alert form.

    • When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.

    • Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • When installing a package, all actions referenced by Alerts and Scheduled searches in the package must be contained in the package. Previously, missing actions were just ignored.

    • It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.

  • GraphQL API

    • The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:

      • createAlert

      • updateAlert

      • createScheduledSearch

      • updateScheduledSearch

  • Dashboards and Widgets

    • The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a yaml file.

    • The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.

  • Ingestion

    • The ability to remove fields when parsing data has been enabled for all users.

      For more information, see Removing Fields.

    • Audit logs for Ingest Tokens now include the ingest token name.

  • Log Collector

    • You can now toggle columns on the instance table to specify which information is shown.

    • In Fleet Management, it is now possible to discard the draft of a configuration and roll back to the published version.

      For more information, see Edit a Remote Configuration.

  • Functions

    • The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.
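
      A sketch of both forms (field names invented for illustration):

      ```logscale
      // Single rename, as before
      rename(field=src_ip, as=source.ip)

      // Rename multiple fields at once with an array of [old, new] pairs
      rename(field=[[src_ip, source.ip], [dst_ip, destination.ip]])
      ```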

    • The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
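
      For example, assuming a field named class, a case-insensitive substring match might look like this (sketch based on the description above):

      ```logscale
      // Match events whose class field contains "foo" in any casing
      wildcard(field=class, pattern="*foo*", ignoreCase=true)
      ```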

    • The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.
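
      A minimal sketch (field names invented; the as parameter for the output field is an assumption):

      ```logscale
      // Compute the MD5 hash over the values of fields a and b
      crypto:md5(field=[a, b], as="checksum")
      ```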

    • Support for decimal values has been added for the exponent in math:pow() and the divisor in math:mod().
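
      An illustrative sketch (field name invented; parameter names taken from the entry above, so the exact syntax may differ):

      ```logscale
      // Fractional exponent is now accepted
      math:pow(field=duration, exp=2.5)

      // Decimal divisor is now accepted
      math:mod(field=duration, divisor=1.5)
      ```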

    • The memory consumption of the formatTime() function has been decreased.

Fixed in this release

  • UI Changes

    • The Time Selector and the date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.

  • Automation and Alerts

    • If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.

    • Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.

    • Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.

    • If you edited a Scheduled Search installed from a package and then updated the package, you would get two copies of the scheduled search. This issue is now fixed.

    • Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.

  • Dashboards and Widgets

    • Fixed an issue where queries on a dashboard would be invalid if the dashboard filter contained a single-line comment.

    • Fixed an issue where widget description tips on dashboards would not show, or showed the same text for multiple widgets.

    • If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behavior remains.

  • Log Collector

    • Fixed an issue where the Fleet Overview in Fleet Management would hang and not display any data.

  • Functions

    • Fixed a bug where join() queries could result in a memory leak because their subqueries were not properly cleaned up.

    • The hash() query function would sometimes compute incorrect hashes when the field contained UTF-8 encoded data. This is now fixed.

    • Fixed an issue that could result in cluster performance degradation using join() under certain circumstances.

    • Field names in the query used to export results to CSV were not quoted correctly. This is now fixed.

    • Fixed an issue where the US date format modifier in the format() function produced the EU date format instead.

  • Other

    • Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

    • The following repository issues have been fixed:

      • After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.

      • Some repositories could only be created partially and would be left partially initialized in the internal architecture of LogScale.

Falcon LogScale 1.106.4 LTS (2023-10-28)

Version: 1.106.4
Type: LTS
Release Date: 2023-10-28
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.4/server-1.106.4.tar.gz

These notes include entries from the following previous releases: 1.106.2

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers an alert if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Up until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behavior prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it stops some alerts from triggering even though they would still have triggered with all data available. In short, you will almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this behavior.

      When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should still make the alert fail.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised the default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default

      • Added a warning that is printed if the sum of the heap size and the direct memory setting exceeds the total available memory

  • UI Changes

    • The Show in context dialog now closes when the Search button in the dialog is clicked.

    • The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.

  • Automation and Alerts

    • It is now possible to import and export Filter Alerts in Packages from the UI.

    • When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form "packagescope/packagename:actionname". Actions in packages are no longer found when referenced by an unqualified name.

    • The UI flow for Alerts has been updated: when you click New alert, you are presented directly with the New alert form.

    • Importing an alert from a template or a package is now done from the new Import from button at the top of the New alert form.

    • When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.

    • Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were silently ignored.

    • It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.

  • GraphQL API

    • The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:

      • createAlert

      • updateAlert

      • createScheduledSearch

      • updateScheduledSearch

  • Configuration

  • Dashboards and Widgets

    • The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.

    • The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.

  • Ingestion

    • The ability to remove fields when parsing data has been enabled for all users.

      For more information, see Removing Fields.

    • Audit logs for Ingest Tokens now include the ingest token name.

  • Log Collector

    • You can now toggle columns on the instance table to specify which information is shown.

    • In Fleet Management, it is now possible to discard the draft of a configuration and roll back to the published version.

      For more information, see Edit a Remote Configuration.

  • Functions

    • The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.

    • The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.

    • The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.

    • Support for decimal values has been added for the exponent in math:pow() and the divisor in math:mod().

    • The memory consumption of the formatTime() function has been decreased.

Fixed in this release

  • UI Changes

    • The Time Selector and the date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.

  • Automation and Alerts

    • If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.

    • Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.

    • Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.

    • If you edited a Scheduled Search installed from a package and then updated the package, you would get two copies of the scheduled search. This issue is now fixed.

    • Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.

  • Dashboards and Widgets

    • Fixed an issue where queries on a dashboard would be invalid if the dashboard filter contained a single-line comment.

    • Fixed an issue where widget description tips on dashboards would not show, or showed the same text for multiple widgets.

    • If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behavior remains.

  • Log Collector

    • Fixed an issue where the Fleet Overview in Fleet Management would hang and not display any data.

  • Functions

    • Fixed a bug where join() queries could result in a memory leak because their subqueries were not properly cleaned up.

    • The hash() query function would sometimes compute incorrect hashes when the field contained UTF-8 encoded data. This is now fixed.

    • Fixed an issue that could result in cluster performance degradation using join() under certain circumstances.

    • Field names in the query used to export results to CSV were not quoted correctly. This is now fixed.

    • Fixed an issue where the US date format modifier in the format() function produced the EU date format instead.

  • Other

    • The following repository issues have been fixed:

      • After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.

      • Some repositories could only be created partially and would be left partially initialized in the internal architecture of LogScale.

Falcon LogScale 1.106.3 Not Released (2023-10-28)

Version: 1.106.3
Type: Not Released
Release Date: 2023-10-28
Availability: Internal Only
End of Support: 2024-10-31
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Not released.

Falcon LogScale 1.106.2 LTS (2023-09-27)

Version: 1.106.2
Type: LTS
Release Date: 2023-09-27
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.106.2/server-1.106.2.tar.gz

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers an alert if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Up until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behavior prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it stops some alerts from triggering even though they would still have triggered with all data available. In short, you will almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this behavior.

      When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should still make the alert fail.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised the default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default

      • Added a warning that is printed if the sum of the heap size and the direct memory setting exceeds the total available memory

  • UI Changes

    • The Show in context dialog now closes when the Search button in the dialog is clicked.

    • The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.

  • Automation and Alerts

    • It is now possible to import and export Filter Alerts in Packages from the UI.

    • When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form "packagescope/packagename:actionname". Actions in packages are no longer found when referenced by an unqualified name.

    • The UI flow for Alerts has been updated: when you click New alert, you are presented directly with the New alert form.

    • Importing an alert from a template or a package is now done from the new Import from button at the top of the New alert form.

    • When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.

    • Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were silently ignored.

    • It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.

  • GraphQL API

    • The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:

      • createAlert

      • updateAlert

      • createScheduledSearch

      • updateScheduledSearch

  • Configuration

  • Dashboards and Widgets

    • The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.

    • The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.

  • Ingestion

    • The ability to remove fields when parsing data has been enabled for all users.

      For more information, see Removing Fields.

    • Audit logs for Ingest Tokens now include the ingest token name.

  • Log Collector

    • You can now toggle columns on the instance table to specify which information is shown.

    • In Fleet Management, it is now possible to discard the draft of a configuration and roll back to the published version.

      For more information, see Edit a Remote Configuration.

  • Functions

    • The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.

    • The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.

    • The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.

    • Support for decimal values has been added for the exponent in math:pow() and the divisor in math:mod().

    • The memory consumption of the formatTime() function has been decreased.

Fixed in this release

  • UI Changes

    • The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.

  • Automation and Alerts

    • If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.

    • Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.

    • Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.

    • If you edited a Scheduled Search installed from a package and then updated the package, you would get two copies of the scheduled search. This issue is now fixed.

    • Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.

  • Dashboards and Widgets

    • Fixed an issue where queries on a dashboard would be invalid if the dashboard filter contained a single-line comment.

    • Fixed an issue where widget description tips on dashboards would not show, or showed the same text for multiple widgets.

    • If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behavior remains.

  • Log Collector

    • Fixed an issue where the Fleet Overview in Fleet Management would hang and not display any data.

  • Functions

    • Fixed a bug where join() queries could result in a memory leak because their subqueries were not properly cleaned up.

    • The hash() query function would sometimes compute incorrect hashes when the field contained UTF-8 encoded data. This is now fixed.

    • Fixed an issue that could result in cluster performance degradation using join() under certain circumstances.

    • Field names in the query used to export results to CSV were not quoted correctly. This is now fixed.

    • Fixed an issue where the US date format modifier in the format() function produced the EU date format instead.

  • Other

    • The following repository issues have been fixed:

      • After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.

      • Some repositories could only be created partially and would be left partially initialized in the internal architecture of LogScale.

Falcon LogScale 1.106.1 GA (2023-09-18)

Version: 1.106.1
Type: GA
Release Date: 2023-09-18
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Installation and Deployment

    • The following adjustments have been made to the launcher script:

      • Removed UnlockDiagnosticVMOptions

      • Raised the default heap size to 75% of host memory, up from 50%

      • Moved -XX:CompileCommand settings into the mandatory launch options, to prevent accidentally removing them when customizing HUMIO_JVM_PERFORMANCE_OPTS

      • Set -XX:MaxDirectMemorySize to 1/5 GB per CPU core as a default

      • Added a warning that is printed if the sum of the heap size and the direct memory setting exceeds the total available memory

  • Configuration

Fixed in this release

  • Functions

    • Fixed a bug where join() queries could result in a memory leak because their subqueries were not properly cleaned up.

Falcon LogScale 1.106.0 GA (2023-09-05)

Version: 1.106.0
Type: GA
Release Date: 2023-09-05
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

  • Automation and Alerts

    • In LogScale version 1.112 we will change how standard alerts handle query warnings. Currently, LogScale only triggers an alert if there are no query warnings. Starting with 1.112, alerts will trigger despite most query warnings, and the alert status will show a warning instead of an error.

      Up until now, all query warnings have been treated as errors. This means that an alert does not trigger even though it produces results, and the alert is shown with an error in LogScale. Most query warnings mean that not all data was queried. The current behavior prevents an alert from triggering in cases where it would not have triggered had all data been available, for instance an alert that triggers when a count of events drops below a threshold. On the other hand, it stops some alerts from triggering even though they would still have triggered with all data available. In short, you will almost never get an alert that you should not have gotten, but you will sometimes miss an alert that you should have gotten. We plan to change this behavior.

      When this change happens, we no longer recommend setting the configuration option ALERT_DESPITE_WARNINGS to true, since it treats all query warnings as non-errors, and there are a few query warnings that should still make the alert fail.

New features and improvements

  • Automation and Alerts

    • When installing or updating a package with an Alert or Scheduled search referencing an action that is not part of the package, the error is now shown in the UI. Previously, a generic error was shown.

  • Dashboards and Widgets

    • The text color styling option of the Note Widget is now included when importing a dashboard template or exporting it to a YAML file.

    • The maximum number of entries suggested in the dropdown of a parameter field of type File Parameter has been increased to 10,000.

  • Log Collector

    • You can now toggle columns on the instance table to specify which information is shown.

  • Functions

    • The rename() function has been enhanced: it is now possible to rename multiple fields using an array in its field argument. This is backwards compatible with giving separate field and as arguments.

Fixed in this release

  • Dashboards and Widgets

    • Fixed an issue where queries on a dashboard would be invalid if the dashboard filter contained a single-line comment.

    • Fixed an issue where widget description tips on dashboards would not show, or showed the same text for multiple widgets.

Falcon LogScale 1.105.0 GA (2023-08-29)

Version: 1.105.0
Type: GA
Release Date: 2023-08-29
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Fixed in this release

  • Other

    • Keyboard navigation did not work in the jump panel.

Falcon LogScale 1.104.0 GA (2023-08-22)

Version: 1.104.0
Type: GA
Release Date: 2023-08-22
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Log Collector

    • In Fleet Management, it is now possible to discard the draft of a configuration and roll back to the published version.

      For more information, see Edit a Remote Configuration.

  • Functions

    • The new query function crypto:md5() is introduced. This function computes the MD5 hash of a given array of fields.

    • Support for decimal values has been added for the exponent in math:pow() and the divisor in math:mod().

Fixed in this release

  • Automation and Alerts

    • If polling queries were slow, then Scheduled Searches could fire twice. This issue is now fixed.

  • Dashboards and Widgets

    • If you chose a page size larger than the number of rows, the page number and page size buttons would disappear. The Table widget now always shows the pagination buttons on the Search page, where auto page size is turned off. On dashboards, where auto page size is turned on, the existing behavior remains.

  • Functions

    • Field names in the query used to export results to CSV were not quoted correctly. This is now fixed.

Falcon LogScale 1.103.0 GA (2023-08-15)

Version: 1.103.0
Type: GA
Release Date: 2023-08-15
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.70.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Automation and Alerts

    • It is now possible to import and export Filter Alerts in Packages from the UI.

    • When installing a package, all actions referenced by Alerts and Scheduled Searches in the package must be contained in the package. Previously, missing actions were silently ignored.

  • Ingestion

    • The ability to remove fields when parsing data has been enabled for all users.

      For more information, see Removing Fields.

Fixed in this release

  • Automation and Alerts

    • Filter Alerts installed from a package would show up under General and not under the Package name. This issue has been fixed.

    • Changes to uploaded files due to a package update would be kept even though the package update failed and other changes were rolled back. This wrong behavior has been fixed.

  • Log Collector

    • Fixed an issue where the Fleet Overview in Fleet Management would hang and not display any data.

  • Functions

    • The hash() query function would sometimes compute incorrect hashes when the field contained UTF-8 encoded data. This is now fixed.

Falcon LogScale 1.102.0 GA (2023-08-08)

Version: 1.102.0
Type: GA
Release Date: 2023-08-08
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • UI Changes

    • The Show in context dialog now closes when the Search button in the dialog is clicked.

    • The fields and values in the Fields Panel and in the Event List are now sorted case-insensitively.

  • Automation and Alerts

    • When creating or updating Filter Alerts using the GraphQL API, it is now possible to refer to actions in Packages using a qualified name of the form "packagescope/packagename:actionname". Actions in packages are no longer found when referenced by an unqualified name.

    • The UI flow for Alerts has been updated: when you click New alert, you are presented directly with the New alert form.

    • Importing an alert from a template or a package is now done from the new Import from button at the top of the New alert form.

    • Added a status field to some of the logs for Standard Alerts and Filter Alerts as well as Scheduled Searches. The field shows whether the current run of the job resulted in a Success or Failure for the Alert or Scheduled Search.

      For more information, see Monitoring Alert Execution through the humio-activity Repository.

    • It is now possible to create Packages containing Filter Alerts, as well as importing such packages, using the API.

  • GraphQL API

    • The following GraphQL mutations have been changed so that the actions field can either contain IDs or names of actions:

      • createAlert

      • updateAlert

      • createScheduledSearch

      • updateScheduledSearch
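
      As a hedged sketch of the name-based form (input fields other than actions are illustrative and may differ from the actual schema):

      ```graphql
      mutation {
        createAlert(input: {
          viewName: "my-view"            # illustrative view name
          name: "failed-logins"
          queryString: "status=failed"
          queryStart: "1h"
          actions: ["Email on-call"]     # an action name now works here, not only an ID
        }) {
          id
        }
      }
      ```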

  • Ingestion

    • Audit logs for Ingest Tokens now include the ingest token name.

  • Functions

    • The new query function wildcard() is introduced. This function makes it easy to search for case-insensitive patterns on dashboards, or in ad-hoc queries.
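
      A hedged example of the new function; the field and pattern values are illustrative:

      ```
      wildcard(field=class, pattern="*TimerJob*", ignoreCase=true)
      ```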

Fixed in this release

  • UI Changes

    • The URL would not be updated when selecting a time interval in the distribution chart on the Search page. This issue is now fixed.

  • Automation and Alerts

    • With Scheduled Searches installed from a package, if you edited the scheduled search and then updated the package, then you would get two copies of the scheduled search. This issue is now fixed.

  • Functions

    • Fixed an issue that could result in cluster performance degradation using join() under certain circumstances.

Falcon LogScale 1.101.1 GA (2023-10-28)

Version: 1.101.1
Type: GA
Release Date: 2023-10-28
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Fixed in this release

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

Falcon LogScale 1.101.0 GA (2023-08-01)

Version: 1.101.0
Type: GA
Release Date: 2023-08-01
Availability: Cloud
End of Support: 2024-09-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Functions

    • The memory consumption of the formatTime() function has been decreased.

Fixed in this release

  • Automation and Alerts

    • Falcon LogScale repository actions have now been fixed for cases where they would ingest data into a repository even though ingest was blocked.

  • Functions

    • The format() function has been fixed: the US date format modifier previously produced the EU date format instead.

  • Other

    • The following repository issues have been fixed:

      • After multiple attempts in quick succession to create a repository with the same name, repositories would become inaccessible.

      • Some repositories could only be created partially and would be left partially initialized in the internal architecture used by LogScale.

Falcon LogScale 1.100.3 LTS (2024-01-22)

Version: 1.100.3
Type: LTS
Release Date: 2024-01-22
Availability: Cloud
End of Support: 2024-08-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.3/server-1.100.3.tar.gz

These notes include entries from the following previous releases: 1.100.0, 1.100.1, 1.100.2

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

GraphQL API

  • The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:

    • installPackageFromRegistry mutation

    • updatePackageFromRegistry mutation

    • package in the Searchdomain datatype

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.

  • Other

    • The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.

New features and improvements

  • Security

    • View permission tokens created from now on will no longer run queries on behalf of the user who created them (the legacy behavior, which stemmed from queries requiring a user). Given the right permissions, they can instead run queries on behalf of the organization.

      Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens, will run based on the organization instead of the user who created the token.

      This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.

    • In the unlikely event where an external actor hits the audit log without an IP set, we will now log null instead of defaulting to the local IP.

    • Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared Dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.

    • Introducing organization query ownership, permission tokens and organization level security policies features.

      For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.

  • UI Changes

    • Organization and system level permissions can now be handled through the UI.

    • When duplicating an alert, you are now redirected straight to the New alert page.

      For more information, see Reusing an Alert.

    • Filter alerts now have an updated In preview label, which no longer behaves like a button but instead shows a message on hover.

  • Automation and Alerts

    • More attributes have been added to Filter alerts:

      • Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).

      • Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.

      For more information, see Filter Alerts.

    • A new Enable/Disable option has been added for Alerts and Scheduled Searches.

      For more information, see Managing Alerts.

    • Improvements have been made in the UI:

      • When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.

      • Added a trigger limit field in the Filter Alerts form.

      • Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.

      • Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.

  • GraphQL API

    • For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.

    • Added limits for GraphQL queries on the total number of selected fields and fragments. Defaults are 1000 for authenticated and 150 for unauthenticated users.

      Cluster administrators can adjust these limits with the GraphQLSelectionSizeLimit and UnauthenticatedGraphQLSelectionSizeLimit dynamic configurations.

    • The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user ID, an exception will be thrown.

    • A GraphQL API has been added to read the current tag groupings on a repository.

      For more information, see repository() .

    • QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.

  • Dashboards and Widgets

    • When clicking Edit in search view on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.

      For more information, see Manage Widgets.

  • Functions

    • Parameter ignoreCase has been added to the in() function, to allow for case-insensitive searching. By default, the provided values are matched case-sensitively.
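
      A hedged example of the new parameter; the field name and values are illustrative:

      ```
      in(loglevel, values=["ERROR", "WARN"], ignoreCase=true)
      ```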

    • Changed the approximation algorithm used for counting distinct values in count(myField, distinct=true) and fieldstats(). Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.

  • Other

    • License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9. If your license key is obsolete, before you upgrade LogScale contact Support to request an equivalent license key that has the new format. All versions of LogScale since 2020 support the new license key format.

      For more information, see License Installation.

    • Tag groupings page is now available under the repository Settings tab to see the tag groupings which are currently in use on a repository.

Fixed in this release

  • Security

    • Hidden validation issues that would prevent saving changes to Security Policies configuration have now been fixed.

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • Fixed an issue where query parameters would be extracted from comments in the query.

    • Fixed an error that was thrown when attempting to export fields to CSV containing spaces.

    • Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.

  • Automation and Alerts

    • Filter alerts with a query ending with a comment would not run. This issue has now been fixed.

  • GraphQL API

    • The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.

  • Configuration

    • Fixed wrong behaviour in the StaticQueryFractionOfCores dynamic configuration. The intent of this configuration is to limit queries from one organization (or user, on single-organization clusters) to at most a certain percentage of mapper threads, throttling queries so that one organization cannot consume all capacity. Previously, throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle.

  • Dashboards and Widgets

    • When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from template with the + Create from package button. This issue is now fixed.

    • Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.

    • Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.

    • On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.

    • The rendering of JSON in the Event List widget is now faster and consumes less memory.

    • In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.

    • When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.

  • Ingestion

    • A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.

  • Functions

    • Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or aggregate function in case).

    • Fixed bucket() and timeChart() functions as they could lead to partially missing results when used in combination with window().

  • Other

    • BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

    • Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

  • Packages

    • Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.

Falcon LogScale 1.100.2 LTS (2023-11-15)

Version: 1.100.2
Type: LTS
Release Date: 2023-11-15
Availability: Cloud
End of Support: 2024-08-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.2/server-1.100.2.tar.gz

These notes include entries from the following previous releases: 1.100.0, 1.100.1

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

GraphQL API

  • The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:

    • installPackageFromRegistry mutation

    • updatePackageFromRegistry mutation

    • package in the Searchdomain datatype

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.

  • Other

    • The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.

New features and improvements

  • Security

    • View permission tokens created from now on will no longer run queries on behalf of the user who created them (the legacy behavior, which stemmed from queries requiring a user). Given the right permissions, they can instead run queries on behalf of the organization.

      Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens, will run based on the organization instead of the user who created the token.

      This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.

    • In the unlikely event where an external actor hits the audit log without an IP set, we will now log null instead of defaulting to the local IP.

    • Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared Dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.

    • Introducing organization query ownership, permission tokens and organization level security policies features.

      For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.

  • UI Changes

    • Organization and system level permissions can now be handled through the UI.

    • When duplicating an alert, you are now redirected straight to the New alert page.

      For more information, see Reusing an Alert.

    • Filter alerts now have an updated In preview label, which no longer behaves like a button but instead shows a message on hover.

  • Automation and Alerts

    • More attributes have been added to Filter alerts:

      • Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).

      • Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.

      For more information, see Filter Alerts.

    • A new Enable/Disable option has been added for Alerts and Scheduled Searches.

      For more information, see Managing Alerts.

    • Improvements have been made in the UI:

      • When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.

      • Added a trigger limit field in the Filter Alerts form.

      • Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.

      • Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.

  • GraphQL API

    • For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.

    • The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user ID, an exception will be thrown.

    • A GraphQL API has been added to read the current tag groupings on a repository.

      For more information, see repository() .

    • QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.

  • Dashboards and Widgets

    • When clicking Edit in search view on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.

      For more information, see Manage Widgets.

  • Functions

    • Parameter ignoreCase has been added to the in() function, to allow for case-insensitive searching. By default, the provided values are matched case-sensitively.

    • Changed the approximation algorithm used for counting distinct values in count(myField, distinct=true) and fieldstats(). Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.

  • Other

    • License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9. If your license key is obsolete, before you upgrade LogScale contact Support to request an equivalent license key that has the new format. All versions of LogScale since 2020 support the new license key format.

      For more information, see License Installation.

    • Tag groupings page is now available under the repository Settings tab to see the tag groupings which are currently in use on a repository.

Fixed in this release

  • Security

    • Hidden validation issues that would prevent saving changes to Security Policies configuration have now been fixed.

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • Fixed an issue where query parameters would be extracted from comments in the query.

    • Fixed an error that was thrown when attempting to export fields to CSV containing spaces.

    • Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.

  • Automation and Alerts

    • Filter alerts with a query ending with a comment would not run. This issue has now been fixed.

  • GraphQL API

    • The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.

  • Configuration

    • Fixed wrong behaviour in the StaticQueryFractionOfCores dynamic configuration. The intent of this configuration is to limit queries from one organization (or user, on single-organization clusters) to at most a certain percentage of mapper threads, throttling queries so that one organization cannot consume all capacity. Previously, throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle.

  • Dashboards and Widgets

    • When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from template with the + Create from package button. This issue is now fixed.

    • Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.

    • Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.

    • On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.

    • The rendering of JSON in the Event List widget is now faster and consumes less memory.

    • In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.

    • When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.

  • Ingestion

    • A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.

  • Functions

    • Fixed an issue where syntax coloring and code completion would stop working in certain cases (using multiple saved queries, or aggregate function in case).

    • Fixed bucket() and timeChart() functions as they could lead to partially missing results when used in combination with window().

  • Other

    • BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

    • Fixed a race condition that could leave a query in a state where it caused an excessive number of 404 HTTP requests, adding unnecessary noise and a bit of extra load to the system.

  • Packages

    • Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.

Falcon LogScale 1.100.1 LTS (2023-10-28)

Version: 1.100.1
Type: LTS
Release Date: 2023-10-28
Availability: Cloud
End of Support: 2024-08-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.1/server-1.100.1.tar.gz

These notes include entries from the following previous releases: 1.100.0

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

GraphQL API

  • The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:

    • installPackageFromRegistry mutation

    • updatePackageFromRegistry mutation

    • package in the Searchdomain datatype

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.

  • Other

    • The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.

New features and improvements

  • Security

    • View permission tokens created from now on will no longer run queries on behalf of the user who created them (the legacy behavior, which stemmed from queries requiring a user). Given the right permissions, they can instead run queries on behalf of the organization.

      Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens, will run based on the organization instead of the user who created the token.

      This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.

    • In the unlikely event where an external actor hits the audit log without an IP set, we will now log null instead of defaulting to the local IP.

    • Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared Dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.

    • Introducing organization query ownership, permission tokens and organization level security policies features.

      For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.

  • UI Changes

    • Organization and system level permissions can now be handled through the UI.

    • When duplicating an alert, you are now redirected straight to the New alert page.

      For more information, see Reusing an Alert.

    • Filter alerts now have an updated In preview label, which no longer behaves like a button but instead shows a message on hover.

  • Automation and Alerts

    • More attributes have been added to Filter alerts:

      • Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).

      • Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.

      For more information, see Filter Alerts.

    • A new Enable/Disable option has been added for Alerts and Scheduled Searches.

      For more information, see Managing Alerts.

    • Improvements have been made in the UI:

      • When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.

      • Added a trigger limit field in the Filter Alerts form.

      • Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.

      • Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.

  • GraphQL API

    • For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.

    • The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user ID, an exception will be thrown.

    • A GraphQL API has been added to read the current tag groupings on a repository.

      For more information, see repository() .

    • QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.

  • Dashboards and Widgets

    • When clicking Edit in search view on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.

      For more information, see Manage Widgets.

  • Functions

    • Parameter ignoreCase has been added to the in() function, to allow for case-insensitive searching. By default, the provided values are matched case-sensitively.

    • Changed the approximation algorithm used for counting distinct values in count(myField, distinct=true) and fieldstats(). Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.

  • Other

    • License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9. If your license key is obsolete, before you upgrade LogScale contact Support to request an equivalent license key that has the new format. All versions of LogScale since 2020 support the new license key format.

      For more information, see License Installation.

    • Tag groupings page is now available under the repository Settings tab to see the tag groupings which are currently in use on a repository.

Fixed in this release

  • Security

    • Hidden validation issues that would prevent saving changes to Security Policies configuration have now been fixed.

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • Fixed an issue where query parameters would be extracted from comments in the query.

    • Fixed an error that was thrown when attempting to export fields to CSV containing spaces.

    • Fixed the default query prefixes which would override exceptions to default role bindings if no query prefix is set in the exceptions. The default query prefix set in the default role will now only impact views that are not defined as an exception to the default rule.

  • Automation and Alerts

    • Filter alerts with a query ending with a comment would not run. This issue has now been fixed.

  • GraphQL API

    • The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.

  • Configuration

    • Fixed wrong behaviour in the StaticQueryFractionOfCores dynamic configuration. The intent of this configuration is to limit queries from one organization (or user, on single-organization clusters) to at most a certain percentage of mapper threads, throttling queries so that one organization cannot consume all capacity. Previously, throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle.

  • Dashboards and Widgets

    • When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the + Create from package button. This issue is now fixed.

    • Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.

    • Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.

    • On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.

    • The rendering of JSON in the Event List widget is now faster and consumes less memory.

    • In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.

    • When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.
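
      For example, a query backing a Bar Chart now keeps the order produced by sort() (the field name is illustrative):

      ```logscale
      groupBy(statuscode, function=count())
      | sort(_count, order=desc)
      ```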

  • Ingestion

    • A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.

  • Functions

    • Fixed an issue where syntax coloring and code completion would stop working in certain cases (when using multiple saved queries, or an aggregate function inside case).

    • Fixed bucket() and timeChart() functions as they could lead to partially missing results when used in combination with window().
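
      Queries of the affected shape combine these functions as follows (the field name and bucket count are illustrative):

      ```logscale
      timeChart(function=window(function=avg(responsetime), buckets=5))
      ```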

  • Other

    • BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

  • Packages

    • Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.

Falcon LogScale 1.100.0 LTS (2023-08-16)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.100.0 | LTS  | 2023-08-16   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.100.0/server-1.100.0.tar.gz

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

GraphQL API

  • The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:

    • installPackageFromRegistry mutation

    • updatePackageFromRegistry mutation

    • package in the Searchdomain datatype

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.

  • Other

    • The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.

New features and improvements

  • Security

    • All view permission tokens created from now on will not be able to run queries based on the user who created them (legacy behavior due to the user requirement for queries). They will, however, be able to run queries on behalf of the organization, given the right permissions.

      Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens, will run based on the organization instead of the user who created the token.

      This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.

    • In the unlikely event where an external actor hits the audit log without an IP set, we will now log null instead of defaulting to the local IP.

    • Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared Dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.

    • Introducing organization query ownership, permission tokens and organization level security policies features.

      For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.

  • UI Changes

    • Organization and system level permissions can now be handled through the UI.

    • When duplicating an alert, you are now redirected straight to the New alert page.

      For more information, see Reusing an Alert.

    • Filter alerts now have an updated In preview label, which no longer behaves like a button but instead shows a message on hover.

  • Automation and Alerts

    • More attributes have been added to Filter alerts:

      • Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).

      • Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.

      For more information, see Filter Alerts.

    • A new Enable/Disable option has been added for Alerts and Scheduled Searches.

      For more information, see Managing Alerts.

    • Improvements have been made in the UI:

      • When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.

      • Added a trigger limit field in the Filter Alerts form.

      • Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.

      • Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.

  • GraphQL API

    • For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.

    • The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.

    • A GraphQL API has been added to read the current tag groupings on a repository.

      For more information, see repository() .

    • QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.

  • Dashboards and Widgets

    • When clicking Edit in search view on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.

      For more information, see Manage Widgets.

  • Functions

    • Parameter ignoreCase has been added to the in() function, to allow for case-insensitive searching. The default is a case-sensitive search for the provided values.
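
      For example, to match values regardless of case (the field and values are illustrative):

      ```logscale
      in(status, values=["error", "warning"], ignoreCase=true)
      ```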

    • Changed the approximation algorithm used for counting distinct values in count(myField, distinct=true) and fieldstats(). Any query using one of the aforementioned functions may report a different number, which in most cases will be more accurate than previous estimates.
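
      Any query using these functions may be affected; for example (the field name is illustrative):

      ```logscale
      count(user_id, distinct=true)
      ```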

  • Other

    • License keys using the format applied before 2021 are no longer supported. Obsolete license formats start with the string eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9. If your license key is obsolete, contact Support before you upgrade LogScale to request an equivalent license key in the new format. All versions of LogScale since 2020 support the new license key format.

      For more information, see License Installation.

    • A Tag groupings page is now available under the repository Settings tab, showing the tag groupings currently in use on a repository.

Fixed in this release

  • Security

    • Hidden validation issues that would prevent saving changes to the Security Policies configuration have now been fixed.

  • UI Changes

    • Fixed an issue where query parameters would be extracted from comments in the query.

    • Fixed an error that was thrown when attempting to export fields to CSV containing spaces.

    • Fixed an issue where the default query prefix would override exceptions to default role bindings if no query prefix was set in the exception. The default query prefix set in the default role now only affects views that are not defined as an exception to the default rule.

  • Automation and Alerts

    • Filter alerts with a query ending with a comment would not run. This issue has now been fixed.

  • GraphQL API

    • The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.

  • Configuration

    • Fixed wrong behaviour in the StaticQueryFractionOfCores dynamic configuration. This configuration is intended to limit queries from one organization (or user, on single-organization clusters) to run on at most a certain percentage of mapper threads, throttling queries so that a single organization cannot consume all capacity. However, throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle; this behaviour has now been fixed.

  • Dashboards and Widgets

    • When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the + Create from package button. This issue is now fixed.

    • Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.

    • Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.

    • On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.

    • The rendering of JSON in the Event List widget is now faster and consumes less memory.

    • In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.

    • When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.

  • Ingestion

    • A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.

  • Functions

    • Fixed an issue where syntax coloring and code completion would stop working in certain cases (when using multiple saved queries, or an aggregate function inside case).

    • Fixed bucket() and timeChart() functions as they could lead to partially missing results when used in combination with window().

  • Other

    • BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

  • Packages

    • Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.

Falcon LogScale 1.99.0 GA (2023-07-18)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.99.0  | GA   | 2023-07-18   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Functions

    • Parameter ignoreCase has been added to the in() function, to allow for case-insensitive searching. The default is a case-sensitive search for the provided values.

Fixed in this release

  • GraphQL API

    • The GraphQL query used by the front page could not return all views and repositories a user had access to, because of an issue with the default roles on groups. This issue has now been fixed.

  • Configuration

    • Fixed wrong behaviour in the StaticQueryFractionOfCores dynamic configuration. This configuration is intended to limit queries from one organization (or user, on single-organization clusters) to run on at most a certain percentage of mapper threads, throttling queries so that a single organization cannot consume all capacity. However, throttled queries from one organization could still block queries from other organizations and prevent them from running, leaving mapper threads idle; this behaviour has now been fixed.

  • Packages

    • Upgrading a Package could result in a conflict for unchanged items when those items had fields beginning or ending with spaces. This issue has now been fixed.

Falcon LogScale 1.98.0 GA (2023-07-11)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.98.0  | GA   | 2023-07-11   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Automation and Alerts

    • Improvements have been made in the UI:

      • When Creating an Alert from a Query, the alert type — Standard or Filter — is auto-selected based on query detection.

      • Added a trigger limit field in the Filter Alerts form.

      • Actions are now selected in Alerts and Scheduled Searches forms using a ComboBox component.

      • Changed the behaviour of the + button for Actions selection in the Alerts and Scheduled Searches forms; it will now take you to the form where you create a new action instead of adding an action to that entity.

  • GraphQL API

    • QueryOnlyAccessTokens GraphQL query field previously used for a prototype has now been removed.

Falcon LogScale 1.97.0 GA (2023-07-04)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.97.0  | GA   | 2023-07-04   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Security

    • All view permission tokens created from now on will not be able to run queries based on the user who created them (legacy behavior due to the user requirement for queries). They will, however, be able to run queries on behalf of the organization, given the right permissions.

      Existing view permission tokens and the resources (scheduled searches, alerts, etc.) are unaffected by this change. For any view permission tokens created after this change, the scheduled searches, alerts, etc. created using these tokens, will run based on the organization instead of the user who created the token.

      This addresses the issue where, for example, alerts created using a view permission token would fail to run if the user who created the token was removed from the organization, or if the permissions needed to run the alert were removed from the user. With the new behaviour, the alert will continue working even if the user is removed or loses the required permissions to run the alert.

    • Migration from the legacy Organization Shared Dashboard IP filter to the Dashboard Security Policies for sharing dashboards will be done by Creating an IP Filter corresponding to the old filter. If the migration can be performed, this IP Filter will be set on all shared dashboards and set as the Shared Dashboard IP filter Security Policy for the organization. If migration cannot be done, a notification will be displayed to the organization owner explaining how to complete the migration manually. Migration cannot be done when there is a shared Dashboard that has an IP filter other than the legacy Organization Shared Dashboard IP filter.

    • Introducing organization query ownership, permission tokens and organization level security policies features.

      For more information, see Organization Owned Queries, Repository & View Permissions, Security Policies.

  • UI Changes

    • Organization and system level permissions can now be handled through the UI.

  • Automation and Alerts

    • More attributes have been added to Filter alerts:

      • Filter alerts will now be able to catch up with up to 24 hours of delay (ingest delays + delays in actions).

      • Filter alerts will now trigger on events that are unavailable for up to 10 minutes due to query warnings.

      For more information, see Filter Alerts.

    • A new Enable/Disable option has been added for Alerts and Scheduled Searches.

      For more information, see Managing Alerts.

  • GraphQL API

    • A GraphQL API has been added to read the current tag groupings on a repository.

      For more information, see repository() .

  • Dashboards and Widgets

    • When clicking Edit in search view on a dashboard widget, the query will now use the live setting of the dashboard. Also, parameter values are carried over.

      For more information, see Manage Widgets.

  • Other

    • A Tag groupings page is now available under the repository Settings tab, showing the tag groupings currently in use on a repository.

Fixed in this release

  • Automation and Alerts

    • Filter alerts with a query ending with a comment would not run. This issue has now been fixed.

  • Dashboards and Widgets

    • The rendering of JSON in the Event List widget is now faster and consumes less memory.

    • When using the sort() function with the Bar Chart widget, it would only stay sorted for a while. The issue has been fixed and it now remains sorted in the same order as the underlying data.

  • Ingestion

    • A 500 status code was issued when ingesting to /api/v1/ingest/json with no assigned parser. It now ingests the rawstring.

Falcon LogScale 1.96.0 GA (2023-06-27)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.96.0  | GA   | 2023-06-27   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • The Kafka client has been upgraded to 3.4.1. The Kafka broker has been upgraded to 3.4.1 in the Kafka container.

New features and improvements

  • UI Changes

    • When duplicating an alert, you are now redirected straight to the New alert page.

      For more information, see Reusing an Alert.

    • Filter alerts now have an updated In preview label, which no longer behaves like a button but instead shows a message on hover.

  • GraphQL API

    • For the updateMaxAutoShardCount and blockIngest GraphQL mutations, it is no longer required to be root; instead, the caller must have the ManageCluster permission.

    • The userId input field on the updateDashboardToken mutation is now optional and deprecated in favor of the queryOwnershipType field. If userId is set to anything other than the calling user's ID, an exception will be thrown.

Fixed in this release

  • Dashboards and Widgets

    • When Using Saved Queries in Interactions, the interaction would not be kept if the saved query was created from a template with the + Create from package button. This issue is now fixed.

    • Description tips that were partly hidden in Table widgets are now correctly visualized in dashboards.

    • In Dashboard Link, the targeted dashboard could not display correctly if the dashboard was renamed. The issue has been fixed by using the dashboard ID instead of the name as reference.

  • Functions

    • Fixed an issue where syntax coloring and code completion would stop working in certain cases (when using multiple saved queries, or an aggregate function inside case).

Falcon LogScale 1.95.0 GA (2023-06-20)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.95.0  | GA   | 2023-06-20   | Cloud        | 2024-08-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

GraphQL API

  • The deprecated RegistryPackage datatype has been deleted, along with the deprecated mutations and fields using it:

    • installPackageFromRegistry mutation

    • updatePackageFromRegistry mutation

    • package in the Searchdomain datatype

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Permit running LogScale on Java 20. Docker containers have been upgraded to be based on Java 20.

Fixed in this release

  • UI Changes

    • Fixed an issue where query parameters would be extracted from comments in the query.

    • Fixed an error that was thrown when attempting to export fields to CSV containing spaces.

    • Fixed an issue where the default query prefix would override exceptions to default role bindings if no query prefix was set in the exception. The default query prefix set in the default role now only affects views that are not defined as an exception to the default rule.

  • Dashboards and Widgets

    • Fixed the parameter form which could not be opened when asterisks were used as quoted identifiers in the query.

    • On charts, the legend tooltip was sometimes hidden towards the bottom of the chart. It has now been fixed to stay within the chart boundaries.

  • Other

    • BucketStorageUploadLatencyJob could incorrectly report that LogScale was falling behind on bucket uploads. This issue has been fixed.

Falcon LogScale 1.94.2 LTS (2023-11-15)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.94.2  | LTS  | 2023-11-15   | Cloud        | 2024-07-31     | No               | 1.44.0        | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.2/server-1.94.2.tar.gz

These notes include entries from the following previous releases: 1.94.0, 1.94.1

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

API

  • Some REST and GraphQL APIs are degraded and deprecated due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.

    The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:

    • api/v1/clusterconfig/segments/prune-replicas

    • api/v1/clusterconfig/segments/distribute-evenly

    • api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all

    • api/v1/clusterconfig/segments/distribute-evenly-to-host

    • api/v1/clusterconfig/segments/distribute-evenly-from-host

    • api/v1/clusterconfig/segments/partitions

    • api/v1/clusterconfig/segments/partitions/setdefaults

    • api/v1/clusterconfig/segments/set-replication-defaults

    • api/v1/clusterconfig/partitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host

    • api/v1/clusterconfig/ingestpartitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)

    The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:

    • startDataRedistribution

    • updateStoragePartitionScheme

    The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.

    The following GraphQL fields on the cluster object are deprecated, and return meaningless values:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

    • storageDivergence

    • reapply_targetSize

    The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:

    • reapply_targetBytes

    • reapply_targetSegments

    • reapply_inboundBytes

    • reapply_inboundSegments

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Be less aggressive updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point and make very minimal node replacements in it to ensure partitions are properly replicated to live nodes.

    • It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. For a cluster with many segments, the heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node, the memory requirement is closer to 10-15 times the size of the snapshot on disk.

New features and improvements

  • UI Changes

    • A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.

    • The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.

    • The Time Selector now only allows zooming out to approximately 4,000 years.

    • The ChangeRetention data permission is now enabled on Cloud environments.

    • When reaching the default capped output in table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.
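
      For example, the cap can be raised explicitly with the limit parameter (the value shown is illustrative):

      ```logscale
      sort(@timestamp, limit=20000)
      ```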

  • Documentation

    • LogScale Kubernetes Reference Architecture new page has been added with LogScale reference architecture description when deploying LogScale using Kubernetes.

    • Regular Expression Syntax new page has been added with extended details of supported regular expression syntax and differences between the LogScale support and other implementations such as Java and Perl.

  • Automation and Alerts

    • The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository; previously the logs would therefore normally be duplicated, and now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.
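
      For example, to see only high-severity activity logs using the severity field (the filter value is illustrative):

      ```logscale
      severity = "Error"
      ```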

    • The possibility to mark alerts and scheduled searches as favorites has been removed.

    • Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.

    • The Actions overview now has quick filters for showing only actions of specific types.

    • The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.

    • Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of, and have a more clarifying help text.

    • The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.

  • GraphQL API

    • The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.

    • The semantics of the field SolitarySegmentSize on the ClusterNode datatype have changed from counting bytes that only exist on that node and have been underreplicated for a while, to counting bytes that only exist on that node.

    • The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.

    • Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
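
      A sketch of enabling an alert with the new mutation; the input field names (id, viewName) are assumptions and should be checked against the GraphQL schema:

      ```graphql
      mutation {
        enableAlert(input: { id: "my-alert-id", viewName: "my-view" })
      }
      ```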

  • Dashboards and Widgets

    • New parsing of Template Expressions has been implemented in the UI for improved performance.

    • When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.

      For more information, see Unused parameters bindings.

    • Improved performance on the Search page, especially when events contain large JSON objects.

    • A new limit of 49 series has been set when using wide-format data (one field per series) in the Scatter Chart widget (the first field is always the x-axis). No such limit applies to long-format data (series defined by one groupBy column).

    • The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.

      For more information, see Empty list alias.

    • Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.

  • Ingestion

    • Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.

      For more information, see Parser Timeout.
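
The distinction can be illustrated with a minimal sketch (assumed mechanics, not LogScale's implementation): budgeting thread CPU time means a stall that burns no CPU on the parser's thread does not count toward the timeout:

```python
import time

def run_with_cpu_timeout(parse, cpu_limit_seconds):
    # Measure CPU time consumed by this thread, not wall-clock time,
    # so GC or scheduler stalls do not count against the parser.
    start = time.thread_time()
    result = parse()
    used = time.thread_time() - start
    if used > cpu_limit_seconds:
        raise TimeoutError("parser exceeded its thread-time budget")
    return result

# A "parser" that sleeps consumes wall time but almost no thread time,
# so it stays within a small CPU budget despite the stall.
value = run_with_cpu_timeout(lambda: (time.sleep(0.05), 42)[1], 0.02)
```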

  • Log Collector

    • Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.

      For more information, see Test a Remote Configuration.

  • Functions

    • Performance improvements when using the regex() function or regex syntax.

    • In the parseTimestamp() function, special format specifiers such as seconds are now recognized independently of capitalization, allowing a case-insensitive match.

  • Other

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.
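
The sharing described above can be sketched as a cache keyed by file identity (illustrative only; the key and loader are not LogScale's actual internals):

```python
# One in-memory copy per distinct file, shared by all queries that use it.
_file_cache = {}

def load_lookup(path, loader):
    if path not in _file_cache:
        _file_cache[path] = loader(path)
    return _file_cache[path]

# Two queries asking for the same file get the very same object back,
# instead of each loading its own copy into memory.
first = load_lookup("users.csv", lambda p: [("a", 1), ("b", 2)])
second = load_lookup("users.csv", lambda p: [("a", 1), ("b", 2)])
```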

    • When the Kafka broker set changes at runtime, LogScale now tracks that set and uses it as the bootstrap servers whenever it needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restarts of LogScale, so when restarting LogScale, make sure to provide an up-to-date set of bootstrap servers.
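
The tracking behavior can be sketched as follows (assumed shape; held in memory only, matching the note that the set is not persisted across restarts):

```python
# Keep the most recently observed broker set and hand it to any new
# Kafka client created at runtime. Names here are illustrative.
class BootstrapTracker:
    def __init__(self, initial):
        self.current = list(initial)

    def on_broker_set_change(self, brokers):
        # Called when the Kafka broker set changes at runtime.
        self.current = list(brokers)

    def bootstrap_servers(self):
        return ",".join(self.current)

tracker = BootstrapTracker(["old1:9092", "old2:9092"])
tracker.on_broker_set_change(["new1:9092", "new2:9092"])
servers = tracker.bootstrap_servers()  # used for the next new client
```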

    • The following cluster management features are now enabled:

      • AutomaticJobDistribution

      • AutomaticDigesterDistribution

      • AutomaticSegmentDistribution

      For more information, see Digest Rules.

Fixed in this release

  • UI Changes

    • Turned off the light bulb in the query editor as it was causing technical issues.

    • Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the Queries menu.

    • The Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.

    • An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permission to view the Organization Settings page, you can also update information on it.

  • Automation and Alerts

    • The throttle field would be empty when editing an Alert; this issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in Alert notifications would land on a missing page.

    • Fixed an issue where some rarely occurring errors from running alerts would not show up on the alert.

  • Dashboards and Widgets

    • Labels of FixedList Parameter values have been fixed so that they default to the value instead of rendering an empty string.

    • Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.

    • The following issues have been fixed on dashboards:

      • A dashboard would sometimes be perceived as changed on the server even though it was not.

      • Discard unsaved changes would appear when creating and applying new parameters.

    • Fixed the Manage interactions page where Event List Interactions were not scrollable.

    • Fixed incorrect behavior on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.

  • Queries

    • An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to miss the data in those segments.

  • Functions

    • The select() function has been fixed as it wasn't preserving tags.

    • The format() function has been fixed, as combining the hexadecimal modifier with grouping would not always work.

    • The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.

    • The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.
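
The shape of the affected patterns can be illustrated with an equivalent Python regex (a sketch; LogScale's engine is a separate implementation):

```python
import re

# A bounded inner repeat {2} nested inside an outer repeat {3} -- the
# class of pattern that previously could produce false negatives.
pattern = re.compile(r"(x{2}:){3}")

matched = pattern.fullmatch("xx:xx:xx:") is not None   # three "xx:" groups
rejected = pattern.fullmatch("xx:xx:") is None         # only two groups
```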

  • Other

    • Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.

    • The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.

    • Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.

Early Access

  • Automation and Alerts

    • This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.

      Filter alerts:

      • Trigger on individual events and send notifications per event.

      • Guarantee at-least-once delivery of events to actions, within the limits described below.

      • Currently only support delays (ingest delays + delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before going out of Public GA, those limits will be raised.

      For more information, see Alerts.

Falcon LogScale 1.94.1 LTS (2023-10-28)

Version: 1.94.1
Type: LTS
Release Date: 2023-10-28
Availability: Cloud
End of Support: 2024-07-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.1/server-1.94.1.tar.gz

These notes include entries from the following previous releases: 1.94.0

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

API

  • Degrade and deprecate some REST and GraphQL APIs due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once the upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.

    The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:

    • api/v1/clusterconfig/segments/prune-replicas

    • api/v1/clusterconfig/segments/distribute-evenly

    • api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all

    • api/v1/clusterconfig/segments/distribute-evenly-to-host

    • api/v1/clusterconfig/segments/distribute-evenly-from-host

    • api/v1/clusterconfig/segments/partitions

    • api/v1/clusterconfig/segments/partitions/setdefaults

    • api/v1/clusterconfig/segments/set-replication-defaults

    • api/v1/clusterconfig/partitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host

    • api/v1/clusterconfig/ingestpartitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)

    The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:

    • startDataRedistribution

    • updateStoragePartitionScheme

    The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.

    The following GraphQL fields on the cluster object are deprecated, and return meaningless values:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

    • storageDivergence

    • reapply_targetSize

    The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:

    • reapply_targetBytes

    • reapply_targetSegments

    • reapply_inboundBytes

    • reapply_inboundSegments

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Be less aggressive about updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point, and do very minimal node replacements in it to ensure partitions are properly replicated to live nodes.
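
The "minimal node replacement" step can be sketched as follows (illustrative data shapes, not the actual automation code): keep the previous table and only swap out replicas that are no longer live:

```python
# Repair a digest partition table by replacing only dead replicas,
# leaving partitions that are already fully live untouched.
def repair_partitions(table, live_nodes):
    live = set(live_nodes)
    repaired = []
    for replicas in table:
        fixed = [n for n in replicas if n in live]
        # Refill lost replicas from live nodes not already in this partition.
        for candidate in live_nodes:
            if len(fixed) == len(replicas):
                break
            if candidate not in fixed:
                fixed.append(candidate)
        repaired.append(fixed)
    return repaired

old = [["n1", "n2"], ["n2", "n3"], ["n3", "n1"]]
new = repair_partitions(old, ["n1", "n2", "n4"])  # n3 went offline
```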

    • Nodes are no longer allowed to delete bucketed mini-segments involved in queries from local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. For a cluster with many segments, the heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node the memory requirement is closer to 10-15 times the size of the snapshot on disk.

New features and improvements

  • UI Changes

    • A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.

    • The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.

    • The Time Selector now only allows zooming out to approximately 4,000 years.

    • The ChangeRetention data permission is now enabled on Cloud environments.

    • When reaching the default capped output in table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.

  • Documentation

    • A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.

    • A new Regular Expression Syntax page has been added with extended details of the supported regular expression syntax and the differences between LogScale and other implementations such as Java and Perl.

  • Automation and Alerts

    • The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. So where the logs would previously be duplicated, now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.

    • The possibility to mark alerts and scheduled searches as favorites has been removed.

    • Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.

    • The Actions overview now has quick filters for showing only actions of specific types.

    • The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.

    • Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of and have clearer help text.

    • The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.

  • GraphQL API

    • The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.

    • The semantics of the field SolitarySegmentSize on the ClusterNode datatype have changed from counting bytes that only exist on that node and which have been underreplicated for a while, to counting all bytes that only exist on that node.

    • The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.

    • Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.

  • Dashboards and Widgets

    • New parsing of Template Expressions has been implemented in the UI for improved performance.

    • When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.

      For more information, see Unused parameters bindings.

    • Improved performance on the Search page, especially when events contain large JSON objects.

      A new limit of 49 series has been set when using the wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).

    • The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.

      For more information, see Empty list alias.

    • Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.

  • Ingestion

    • Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.

      For more information, see Parser Timeout.

  • Log Collector

    • Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.

      For more information, see Test a Remote Configuration.

  • Functions

    • Performance improvements when using the regex() function or regex syntax.

    • In the parseTimestamp() function, special format specifiers such as seconds are now recognized independently of capitalization, allowing a case-insensitive match.

  • Other

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.

    • When the Kafka broker set changes at runtime, LogScale now tracks that set and uses it as the bootstrap servers whenever it needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restarts of LogScale, so when restarting LogScale, make sure to provide an up-to-date set of bootstrap servers.

    • The following cluster management features are now enabled:

      • AutomaticJobDistribution

      • AutomaticDigesterDistribution

      • AutomaticSegmentDistribution

      For more information, see Digest Rules.

Fixed in this release

  • UI Changes

    • Turned off the light bulb in the query editor as it was causing technical issues.

    • Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the Queries menu.

    • The Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.

    • An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permission to view the Organization Settings page, you can also update information on it.

  • Automation and Alerts

    • The throttle field would be empty when editing an Alert; this issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in Alert notifications would land on a missing page.

    • Fixed an issue where some rarely occurring errors from running alerts would not show up on the alert.

  • Dashboards and Widgets

    • Labels of FixedList Parameter values have been fixed so that they default to the value instead of rendering an empty string.

    • Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.

    • The following issues have been fixed on dashboards:

      • A dashboard would sometimes be perceived as changed on the server even though it was not.

      • Discard unsaved changes would appear when creating and applying new parameters.

    • Fixed the Manage interactions page where Event List Interactions were not scrollable.

    • Fixed incorrect behavior on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.

  • Queries

    • An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to miss the data in those segments.

  • Functions

    • The select() function has been fixed as it wasn't preserving tags.

    • The format() function has been fixed, as combining the hexadecimal modifier with grouping would not always work.

    • The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.

    • The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.

  • Other

    • Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.

    • The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.

    • Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.

Early Access

  • Automation and Alerts

    • This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.

      Filter alerts:

      • Trigger on individual events and send notifications per event.

      • Guarantee at-least-once delivery of events to actions, within the limits described below.

      • Currently only support delays (ingest delays + delays in actions) of up to 1 hour and limit the number of notifications to 15 per minute per alert. Before going out of Public GA, those limits will be raised.

      For more information, see Alerts.

Falcon LogScale 1.94.0 LTS (2023-07-05)

Version: 1.94.0
Type: LTS
Release Date: 2023-07-05
Availability: Cloud
End of Support: 2024-07-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.94.0/server-1.94.0.tar.gz

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

API

  • Degrade and deprecate some REST and GraphQL APIs due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once the upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.

    The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:

    • api/v1/clusterconfig/segments/prune-replicas

    • api/v1/clusterconfig/segments/distribute-evenly

    • api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all

    • api/v1/clusterconfig/segments/distribute-evenly-to-host

    • api/v1/clusterconfig/segments/distribute-evenly-from-host

    • api/v1/clusterconfig/segments/partitions

    • api/v1/clusterconfig/segments/partitions/setdefaults

    • api/v1/clusterconfig/segments/set-replication-defaults

    • api/v1/clusterconfig/partitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host

    • api/v1/clusterconfig/ingestpartitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)

    The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:

    • startDataRedistribution

    • updateStoragePartitionScheme

    The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.

    The following GraphQL fields on the cluster object are deprecated, and return meaningless values:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

    • storageDivergence

    • reapply_targetSize

    The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:

    • reapply_targetBytes

    • reapply_targetSegments

    • reapply_inboundBytes

    • reapply_inboundSegments

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Be less aggressive about updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point, and do very minimal node replacements in it to ensure partitions are properly replicated to live nodes.

    • Nodes are no longer allowed to delete bucketed mini-segments involved in queries from local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. For a cluster with many segments, the heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node the memory requirement is closer to 10-15 times the size of the snapshot on disk.

New features and improvements

  • UI Changes

    • A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.

    • The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.

    • The Time Selector now only allows zooming out to approximately 4,000 years.

    • The ChangeRetention data permission is now enabled on Cloud environments.

    • When reaching the default capped output in table() and sort() query functions, a warning now suggests you can set a new value using the limit parameter.

  • Documentation

    • A new LogScale Kubernetes Reference Architecture page has been added, describing the LogScale reference architecture when deploying LogScale using Kubernetes.

    • A new Regular Expression Syntax page has been added with extended details of the supported regular expression syntax and the differences between LogScale and other implementations such as Java and Perl.

  • Automation and Alerts

    • The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. So where the logs would previously be duplicated, now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.

    • The possibility to mark alerts and scheduled searches as favorites has been removed.

    • Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.

    • The Actions overview now has quick filters for showing only actions of specific types.

    • The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.

    • Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of and have clearer help text.

    • The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.

  • GraphQL API

    • The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.

    • The semantics of the field SolitarySegmentSize on the ClusterNode datatype have changed from counting bytes that only exist on that node and which have been underreplicated for a while, to counting all bytes that only exist on that node.

    • The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.

    • Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.

  • Dashboards and Widgets

    • New parsing of Template Expressions has been implemented in the UI for improved performance.

    • When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.

      For more information, see Unused parameters bindings.

    • Improved performance on the Search page, especially when events contain large JSON objects.

      A new limit of 49 series has been set when using the wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x axis). No such limit applies to long format data (series defined by one groupby column).

    • The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.

      For more information, see Empty list alias.

    • Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.

  • Ingestion

    • Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.

      For more information, see Parser Timeout.

  • Log Collector

    • Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.

      For more information, see Test a Remote Configuration.

  • Functions

    • Performance improvements when using the regex() function or regex syntax.

    • In the parseTimestamp() function, special format specifiers such as seconds are now recognized independently of capitalization, allowing a case-insensitive match.

  • Other

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.

    • When the Kafka broker set changes at runtime, LogScale now tracks that set and uses it as the bootstrap servers whenever it needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restarts of LogScale, so when restarting LogScale, make sure to provide an up-to-date set of bootstrap servers.

    • The following cluster management features are now enabled:

      • AutomaticJobDistribution

      • AutomaticDigesterDistribution

      • AutomaticSegmentDistribution

      For more information, see Digest Rules.

Fixed in this release

  • UI Changes

    • Turned off the light bulb in the query editor as it was causing technical issues.

    • Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the Queries menu.

    • Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.

    • An error about lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permission to view the Organization Settings page, you can also update information on it.

  • Automation and Alerts

    • The throttle field would be empty when editing an Alert; this issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in Alert notifications would land on a missing page.

    • Fixed an issue where some rarely occurring errors from running alerts would not show up on the alert.

  • Dashboards and Widgets

    • Labels of FixedList Parameter values have been fixed so that they default to the value instead of rendering an empty string.

    • Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.

    • The following issues have been fixed on dashboards:

      • A dashboard would sometimes be perceived as changed on the server even though it was not.

      • Discard unsaved changes would appear when creating and applying new parameters.

    • Fixed the Manage interactions page where Event List Interactions were not scrollable.

    • Fixed incorrect behavior on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository options dropdown would open in it instead of in the Create new interaction dialog.

  • Queries

    • An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to miss the data in those segments.

  • Functions

    • The select() function has been fixed as it wasn't preserving tags.

    • The format() function has been fixed, as the hexadecimal modifier combined with grouping would not always work.

    • The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.

    • The regex engine has been fixed for issues impacting nested repeats and giving false negatives, as in expressions such as (x{2}:){3}.
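      The class of expression affected can be exercised outside LogScale as well. This Python sketch (using Python's re engine for illustration, not LogScale's regex engine) shows the expected behaviour of a nested repeat such as (x{2}:){3}:

```python
import re

# Nested repeat: three occurrences of exactly two 'x' characters plus a colon.
pattern = re.compile(r"(x{2}:){3}")

# A conforming input matches in full; a false negative here would be a bug.
assert pattern.fullmatch("xx:xx:xx:") is not None
# An input with too few repeats must not match.
assert pattern.fullmatch("xx:xx:") is None
```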

  • Other

    • Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.

    • The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.

    • Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.

Early Access

  • Automation and Alerts

    • This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.

      Filter alerts:

      • Trigger on individual events and send notifications per event.

      • Guarantee at-least-once delivery of events to actions, within the limits described below.

      • Currently only support delays (ingest delays plus delays in actions) of up to 1 hour, and limit the number of notifications to 15 per minute per alert. Those limits will be raised before the feature reaches general availability.

      For more information, see Alerts.
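      The per-alert notification cap can be pictured as a sliding-window rate limiter. This Python sketch illustrates the documented limit (15 notifications per minute per alert); it is an illustration of the idea, not LogScale's implementation:

```python
from collections import defaultdict, deque

NOTIFICATION_LIMIT = 15   # per minute, per alert (limit from the release note)
WINDOW_SECONDS = 60.0

class NotificationLimiter:
    """Sliding-window limiter: at most 15 notifications per minute per alert."""
    def __init__(self):
        self.sent = defaultdict(deque)   # alert id -> timestamps of recent sends

    def allow(self, alert_id, now):
        window = self.sent[alert_id]
        # Drop timestamps that have fallen out of the one-minute window.
        while window and now - window[0] >= WINDOW_SECONDS:
            window.popleft()
        if len(window) < NOTIFICATION_LIMIT:
            window.append(now)
            return True
        return False

limiter = NotificationLimiter()
results = [limiter.allow("alert-1", t) for t in range(20)]  # 20 events in 20 s
assert results.count(True) == 15  # the 16th and later events are throttled
```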

Falcon LogScale 1.93.0 GA (2023-06-06)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.93.0  | GA   | 2023-06-06   | Cloud        | 2024-07-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Automation and Alerts

    • The possibility to mark alerts and scheduled searches as favorites has been removed.

    • Improvements in the layout of Alerts and Scheduled Searches, which now have updated forms.

    • The Actions overview now has quick filters for showing only actions of specific types.

    • The Scheduled Searches overview now shows the status of scheduled searches with a colored dot to make it easy to spot failing scheduled searches.

    • Improvements in the Alerts and Scheduled Searches permissions, which are now renamed to Run on behalf of and have clearer help text.

    • The Alerts overview now has quick filters for showing only standard alerts or filter alerts. It also shows the status of alerts with a colored dot to make it easy to spot failing alerts.

  • GraphQL API

    • The semantics of the field SolitarySegmentSize on the ClusterNode datatype has changed from counting bytes that only exist on that node and which have been underreplicated for a while, to counting bytes that only exist on that node.

  • Dashboards and Widgets

    • Improved performance on the Search page, especially when events contain large JSON objects.

      A new limit of 49 series has been set when using wide format data (one field per series) in the Scatter Chart Widget (the first field is always the x-axis). No such limit applies to long format data (series defined by one groupby column).
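      The difference between the two formats, and the new limit, can be sketched as follows. The helper is hypothetical; only the 49-series limit and the first-field-as-x-axis rule come from the note above:

```python
MAX_WIDE_SERIES = 49  # limit from the release note above

def wide_format_series(fields):
    """Wide format: the first field is the x-axis, each remaining field is a
    series. Hypothetical helper for illustration only."""
    x_axis, series = fields[0], fields[1:]
    if len(series) > MAX_WIDE_SERIES:
        raise ValueError(f"{len(series)} series exceeds the limit of {MAX_WIDE_SERIES}")
    return x_axis, series

x_axis, series = wide_format_series(["timestamp", "cpu", "memory"])
assert x_axis == "timestamp" and series == ["cpu", "memory"]
```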

  • Ingestion

    • Parser timeouts have been changed to take thread time into account. This should make parsers more resilient to long Garbage Collector stalls.

      For more information, see Parser Timeout.

Fixed in this release

  • Dashboards and Widgets

    • Labels of FixedList Parameter values have been fixed, so that they default to the value instead of rendering an empty string.

  • Functions

    • The format() function has been fixed, as the hexadecimal modifier combined with grouping would not always work.

Early Access

  • Automation and Alerts

    • This release includes filter alerts in Early Access. Filter alerts aim to replace existing alerts for use cases where the query does not contain any aggregates.

      Filter alerts:

      • Trigger on individual events and send notifications per event.

      • Guarantee at-least-once delivery of events to actions, within the limits described below.

      • Currently only support delays (ingest delays plus delays in actions) of up to 1 hour, and limit the number of notifications to 15 per minute per alert. Those limits will be raised before the feature reaches general availability.

      For more information, see Alerts.

Falcon LogScale 1.92.0 GA (2023-05-30)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.92.0  | GA   | 2023-05-30   | Cloud        | 2024-07-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Be less aggressive about updating the digest partitions when a node goes offline. When a node goes offline or online, creating a well-balanced table can require changes to partitions other than those where the changed node appears. This can cause more digest reassignment than we'd like, so we're changing the behavior of the automation. We'll now only generate optimally balanced tables in reaction to nodes being registered or unregistered from the cluster, and in reaction to the digest replication factor changing. The rest of the time, we'll take the previously generated balanced table as a starting point, and do very minimal node replacements in it to ensure partitions are properly replicated to live nodes.

    • It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Metadata on segments in memory is now represented in a manner that requires less memory at runtime after booting. The heap required for the global snapshot is in the range of 3-6 times the size of the snapshot on disk, for a cluster with many segments. This change reduces the memory requirements for long retention compared to previous versions. Note that for a short time during boot of a node the memory requirement is closer to 10-15 times the size of the snapshot on disk.

New features and improvements

  • UI Changes

    • A new tutorial built on a dedicated demo data view is available for environments that do not have access to the legacy tutorial based on a sandbox repository.

    • The DeleteRepositoryOrView data permission is now visible in the UI on Cloud environments.

    • The Time Selector now only allows zooming out to approximately 4,000 years.

    • The ChangeRetention data permission is now enabled on Cloud environments.

  • Documentation

    • A new LogScale Kubernetes Reference Architecture page has been added, describing the reference architecture for deploying LogScale using Kubernetes.

    • A new Regular Expression Syntax page has been added with extended details of the supported regular expression syntax and the differences between LogScale's support and other implementations such as Java and Perl.

  • GraphQL API

    • The Usage page has been updated to support queries that are in progress for longer than the GraphQL timeout allows.

    • The GraphQL schema for UsageStats has been updated to reflect that queries can be in progress.

  • Dashboards and Widgets

    • New parsing of Template Expressions has been implemented in the UI for improved performance.

    • When creating or editing interactions you can now visualize any unused parameter bindings, with the option to remove them.

      For more information, see Unused parameters bindings.

    • The empty list alias is now available as an input option for parameter bindings, so that Multi-value Parameters can be set explicitly to have the value of an empty list.

      For more information, see Empty list alias.

    • Parameter labels are now used instead of parameter IDs when displaying the list of parameters that a widget / query is waiting on.

  • Queries

    • Polling a query on /queryjobs can now delay the response a bit in order to allow returning a potentially done response. The typical effective delay is less than 2 seconds, and the positive effect is saving the extra poll roundtrip that would otherwise need to happen before the query completed. This in particular makes simple queries complete faster from the viewpoint of the client, as they do not have to wait for an extra poll roundtrip in most cases.
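From the client's side, this means a poll that arrives just before completion can return the finished result directly. A minimal sketch of such a polling loop (illustrative only, not the LogScale client):

```python
import itertools

def poll_until_done(poll, max_polls=100):
    """Client-side polling loop. With the change above, the server may hold a
    poll briefly (typically under 2 seconds) so a query that is about to
    finish can answer `done` without one extra round trip."""
    for polls in itertools.count(1):
        response = poll()
        if response["done"] or polls >= max_polls:
            return response, polls

# Fake server: the query finishes while the second poll is being held.
responses = iter([{"done": False}, {"done": True, "events": []}])
result, polls = poll_until_done(lambda: next(responses))
assert result["done"] and polls == 2
```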

  • Other

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.
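The sharing described above can be pictured as a reference-counted cache: all queries using the same file get the same in-memory object, and the contents are freed once the last query releases them. This is an illustration of the idea, not LogScale's implementation:

```python
class SharedFileCache:
    """One in-memory copy of each lookup file, shared by all running queries."""
    def __init__(self, load):
        self.load = load          # function: file name -> parsed contents
        self.cache = {}
        self.refcounts = {}

    def acquire(self, name):
        if name not in self.cache:
            self.cache[name] = self.load(name)
            self.refcounts[name] = 0
        self.refcounts[name] += 1
        return self.cache[name]

    def release(self, name):
        self.refcounts[name] -= 1
        if self.refcounts[name] == 0:   # last query done: free the copy
            del self.cache[name], self.refcounts[name]

loads = []
cache = SharedFileCache(lambda name: loads.append(name) or {"rows": name})
a = cache.acquire("iocs.csv")
b = cache.acquire("iocs.csv")       # a second query reuses the same object
assert a is b and loads == ["iocs.csv"]
```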

    • When the Kafka broker set changes at runtime, track that set and use as bootstrap servers for Kafka whenever LogScale needs to create a new Kafka client at runtime. This allows replacing all Kafka brokers (incrementally, moving their work to new servers) without restarting LogScale. Note that the set is not persisted across restart of LogScale, so when restarting LogScale, make sure to provide an up to date set of bootstrap servers.
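A sketch of this tracking behaviour, with the non-persistence caveat from the note made explicit (an illustration, not LogScale's code):

```python
class BootstrapServers:
    """Track the live Kafka broker set at runtime; new clients use the tracked
    set rather than the servers from the original configuration. The tracked
    set is not persisted, so after a restart the configured value applies."""
    def __init__(self, configured):
        self.configured = list(configured)
        self.tracked = list(configured)

    def on_broker_set_change(self, brokers):
        self.tracked = list(brokers)

    def for_new_client(self):
        return ",".join(self.tracked)

servers = BootstrapServers(["kafka-old-1:9092"])
servers.on_broker_set_change(["kafka-new-1:9092", "kafka-new-2:9092"])
assert servers.for_new_client() == "kafka-new-1:9092,kafka-new-2:9092"
```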

Fixed in this release

  • Security

    • Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • Fixed an issue where the filter would remain applied in the saved or recent queries when switching tabs in the Queries menu.

    • Fixed the order of the timezones in the timezone dropdown on the Search and Dashboards pages.

  • Automation and Alerts

    • Fixed an issue where some rarely occurring errors encountered when running alerts would not show up on the alert.

  • Dashboards and Widgets

    • Fixed an issue where certain widget options would be ignored when importing a dashboard template or installing a package.

    • Fixed incorrect behaviour on the Interactions overview page when creating a new interaction: if the interaction panel was open, the repository dropdown would open inside the panel instead of in the Create new interaction dialog.

  • Other

    • The following Node-Level Metrics that showed incorrect results are now fixed: primary-disk-usage, secondary-disk-usage, cluster-time-skew, temp-disk-usage-bytes.

Falcon LogScale 1.91.0 Not Released (2023-05-23)

Version | Type         | Release Date | Availability  | End of Support | Security Updates | Upgrades From | Config. Changes
1.91.0  | Not Released | 2023-05-23   | Internal Only | 2024-05-31     | No               | 1.44.0        | No

Available for download two days after release.

Not released.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Falcon LogScale 1.90.0 Not Released (2023-05-16)

Version | Type         | Release Date | Availability  | End of Support | Security Updates | Upgrades From | Config. Changes
1.90.0  | Not Released | 2023-05-16   | Internal Only | 2024-05-31     | No               | 1.44.0        | No

Available for download two days after release.

Not released.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Falcon LogScale 1.89.0 GA (2023-05-11)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.89.0  | GA   | 2023-05-11   | Cloud        | 2024-07-31     | No               | 1.44.0        | No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Removed

Items that have been removed as of this release.

API

  • Degrade and deprecate some REST and GraphQL APIs due to the introduction of AutomaticSegmentDistribution and AutomaticDigesterDistribution. The deprecated elements will be removed in a future release, once the upgrade compatibility with version 1.88.0 is dropped. We expect this to be no earlier than September 2023.

    The following REST endpoints are deprecated, as they no longer have an effect and return meaningless results:

    • api/v1/clusterconfig/segments/prune-replicas

    • api/v1/clusterconfig/segments/distribute-evenly

    • api/v1/clusterconfig/segments/distribute-evenly-reshuffle-all

    • api/v1/clusterconfig/segments/distribute-evenly-to-host

    • api/v1/clusterconfig/segments/distribute-evenly-from-host

    • api/v1/clusterconfig/segments/partitions

    • api/v1/clusterconfig/segments/partitions/setdefaults

    • api/v1/clusterconfig/segments/set-replication-defaults

    • api/v1/clusterconfig/partitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions/distribute-evenly-from-host

    • api/v1/clusterconfig/ingestpartitions/setdefaults

    • api/v1/clusterconfig/ingestpartitions (POST only, GET will continue to work)

    The following GraphQL mutations are deprecated, as they no longer have an effect and return meaningless results:

    • startDataRedistribution

    • updateStoragePartitionScheme

    The IngestPartitionScheme mutation is not deprecated, but as it updates state that is overwritten by automation, we recommend against using it — it exists solely to serve as a debugging tool.

    The following GraphQL fields on the cluster object are deprecated, and return meaningless values:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions

    • storageDivergence

    • reapply_targetSize

    The following fields in the return value of the api/v1/clusterconfig/segments/segment-stats endpoint are deprecated and degraded to always be 0:

    • reapply_targetBytes

    • reapply_targetSegments

    • reapply_inboundBytes

    • reapply_inboundSegments

New features and improvements

  • Automation and Alerts

    • The Alert and Scheduled Search jobs no longer produce logs about specific alerts or scheduled searches in the humio repository. The logs are still sent to the humio-activity repository, which in a normal setup is also ingested into the humio repository. Previously the logs would therefore normally be duplicated; now they are not. The only difference between the two types of logs is that the logs from the humio-activity repository all have loglevel equal to INFO. You can use the severity field instead to distinguish between the severity of the logs.

  • GraphQL API

    • Mutations enableAlert and disableAlert have been added for enabling and disabling an alert without changing other fields.
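A call to these mutations could be sketched as follows. The mutation names come from the release note; the argument shape used here (a plain id argument) is an illustrative assumption, not the documented schema:

```python
import json

def alert_mutation(mutation_name, alert_id):
    """Build a GraphQL request body for enableAlert/disableAlert. The argument
    shape (a single id) is a hypothetical simplification for illustration."""
    query = "mutation($id: String!) { %s(id: $id) }" % mutation_name
    return json.dumps({"query": query, "variables": {"id": alert_id}})

body = json.loads(alert_mutation("enableAlert", "alert-42"))
assert "enableAlert" in body["query"]
assert body["variables"] == {"id": "alert-42"}
```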

  • Configuration

    • Automatic rebalancing of existing segments onto cluster nodes has been enabled.

      Manual editing of the segment partition table is no longer supported. The table is no longer displayed in the Cluster Administration UI.

      The segments will be distributed onto cluster nodes based on the following node-level settings:

      • ZONE defines a node's zone. The balancing logic will attempt to distribute segment replicas across as many zones as possible.

      • The target disk usage percentage determines how much of the node disk we will consider usable for storing segment data during a rebalance. The balancing logic will attempt to keep nodes equally full, while considering the node zone and segment replication factor. This can be configured via GraphQL using the setTargetDiskUsagePercentage mutation. The default value is 90.

      • Nodes with a NODE_ROLES setting that excludes segment storage will not receive segments as part of a rebalance.
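The placement constraints above can be sketched as follows. This is an illustration of the stated rules (spread across zones, prefer emptier nodes, respect the target disk usage and node roles), not LogScale's actual balancing algorithm:

```python
def pick_replica_nodes(nodes, replicas, target_pct=90):
    """Illustrative placement: skip nodes whose roles exclude segment storage
    or that are above the target disk usage, spread replicas across zones,
    and prefer the emptiest nodes."""
    eligible = [n for n in nodes
                if n["stores_segments"] and n["disk_pct"] < target_pct]
    eligible.sort(key=lambda n: n["disk_pct"])
    chosen, zones = [], set()
    for n in eligible:                      # first pass: one node per zone
        if n["zone"] not in zones:
            chosen.append(n)
            zones.add(n["zone"])
        if len(chosen) == replicas:
            return chosen
    for n in eligible:                      # fill up if there are too few zones
        if n not in chosen:
            chosen.append(n)
        if len(chosen) == replicas:
            break
    return chosen

nodes = [
    {"id": 1, "zone": "a", "disk_pct": 40, "stores_segments": True},
    {"id": 2, "zone": "a", "disk_pct": 10, "stores_segments": True},
    {"id": 3, "zone": "b", "disk_pct": 95, "stores_segments": True},
    {"id": 4, "zone": "b", "disk_pct": 30, "stores_segments": True},
    {"id": 5, "zone": "b", "disk_pct": 20, "stores_segments": False},
]
chosen = pick_replica_nodes(nodes, replicas=2)
assert {n["id"] for n in chosen} == {2, 4}  # one node per zone, emptiest first
```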

  • Log Collector

    • Added a new test status for configurations, which allows you to try out a configuration on one or more instances before it's published.

      For more information, see Test a Remote Configuration.

  • Other

    • The following cluster management features are now enabled:

      • AutomaticJobDistribution

      • AutomaticDigesterDistribution

      • AutomaticSegmentDistribution

      For more information, see Digest Rules.

Fixed in this release

  • UI Changes

    • The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.

    • An error for lacking permissions that appeared when updating the organization settings has been fixed. Now, if you have permissions to view the Organization Settings page, you can also update information on it.

  • Automation and Alerts

    • The throttle field would be empty when editing an Alert; this issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in Alert notifications would land on a missing page.

  • Dashboards and Widgets

    • The following issues have been fixed on dashboards:

      • A dashboard would sometimes be perceived as changed on the server even though it was not.

      • Discard unsaved changes would appear when creating and applying new parameters.

  • Queries

    • An edge case has been fixed where query workers could fail to include mini-segments if the mini-segments were merged at a bad time, causing queries to be missing the data in those segments.

  • Functions

    • The rename() function would drop the field if the field and as arguments were identical; this issue has now been fixed.

  • Other

    • Some merged segments could temporarily be missing from query results right after an ephemeral node reboot. This issue has been fixed.

    • Fixed an issue that could cause segments to appear missing in queries, due to the presence of deleted mini-segments with the same target as live mini-segments.

Falcon LogScale 1.88.2 LTS (2023-07-04)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.88.2  | LTS  | 2023-07-04   | Cloud        | 2024-05-31     | No               | 1.44.0        | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.2/server-1.88.2.tar.gz

These notes include entries from the following previous releases: 1.88.0, 1.88.1

Bug fix and updates.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Storage

    • It is no longer allowed for nodes to delete bucketed mini-segments involved in queries off local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.

    • SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.

New features and improvements

  • Automation and Alerts

    • Clicking the Labels button in Alerts will now show every unique label that has been created on every alert in the same repository. This means that you don't need to rewrite a label when wanting to add the same label to another alert. This feature also applies to Scheduled Searches.

    • The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error while the dismiss icon will just close the message but not clear errors.

    • When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.

      For more information, see Creating Alerts.

    • The default time window for Alerts has been updated:

      • When creating an alert from the Alerts page, the default query time window has been changed from 24 Hours to 1 Hour to match the default throttle time.

      • When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.

      For more information, see Creating Alerts.

    • When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.

  • Configuration

    • New configuration parameters have been added allowing control of client.rack for our Kafka consumers:

      • KAFKA_CLIENT_RACK_ENV_VAR — this variable is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.
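      The indirection above (one variable naming the variable that holds the value) can be sketched as follows; the default of ZONE comes from the note, the helper itself is illustrative:

```python
import os

def kafka_client_rack(env=os.environ):
    """Resolve client.rack via the indirection described above:
    KAFKA_CLIENT_RACK_ENV_VAR names the variable that holds the value,
    defaulting to ZONE."""
    source_var = env.get("KAFKA_CLIENT_RACK_ENV_VAR", "ZONE")
    return env.get(source_var)

assert kafka_client_rack({"ZONE": "eu-west-1a"}) == "eu-west-1a"
assert kafka_client_rack({"KAFKA_CLIENT_RACK_ENV_VAR": "RACK", "RACK": "r1"}) == "r1"
```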

    • Using the storage class "S3 Intelligent-Tiering" in AWS S3 selectively on eligible files continues to be supported: it is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention the data must have in order to switch from the default "S3 Standard" to the Intelligent tier.

      The decision is made at the point of upload to the bucket only, whereas existing objects in the bucket are not modified.

      The bucket must be configured not to enable the optional Archive Access and Deep Archive Access tiers, as those do not provide instant access, which is required by LogScale.

      As a consequence of that, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
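      The upload-time decision can be sketched as follows. The configuration name comes from the note above; whether the threshold comparison is inclusive is an assumption of this sketch:

```python
def storage_class(remaining_retention_days, threshold_days):
    """Decision made at upload time only: switch to S3 Intelligent-Tiering
    when the data's remaining retention reaches the configured threshold
    (BucketStorageUploadInfrequentThresholdDays); otherwise use S3 Standard.
    Existing objects in the bucket are never modified."""
    if remaining_retention_days >= threshold_days:
        return "INTELLIGENT_TIERING"
    return "STANDARD"

assert storage_class(365, threshold_days=90) == "INTELLIGENT_TIERING"
assert storage_class(30, threshold_days=90) == "STANDARD"
```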

    • The new configuration parameter SEGMENT_READ_FADVICE has been introduced.

    • The following cluster-level setting has been introduced, editable via GraphQL mutations:

      This is also configurable via the DEFAULT_SEGMENT_REPLICATION_FACTOR configuration parameter.

      If configured via both environment variable and GraphQL mutation, the mutation has precedence.

      For new clusters the default is 1. For clusters upgrading from older versions, the initial value is taken from the STORAGE_REPLICATION_FACTOR environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade. This means that upgrading clusters should see no change to their replication factor unless STORAGE_REPLICATION_FACTOR specifies one.

      The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments, or the environment variable DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS.

      If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.
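      The precedence rules above can be collapsed into a sketch like the following (simplified: `env_value` stands in for DEFAULT_SEGMENT_REPLICATION_FACTOR, or STORAGE_REPLICATION_FACTOR on an upgrading cluster; this is an illustration, not LogScale's code):

```python
def effective_replication_factor(mutation_value=None, env_value=None,
                                 previous_table_factor=None, new_cluster=True):
    """A value set via the GraphQL mutation wins over the environment
    variable; an upgrading cluster with neither falls back to the
    replication factor of its previous storage partition table; new
    clusters default to 1."""
    if mutation_value is not None:
        return mutation_value
    if env_value is not None:
        return env_value
    if not new_cluster and previous_table_factor is not None:
        return previous_table_factor
    return 1

assert effective_replication_factor() == 1
assert effective_replication_factor(mutation_value=3, env_value=2) == 3
assert effective_replication_factor(previous_table_factor=2, new_cluster=False) == 2
```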

    • Disable the AutomaticDigesterDistribution feature by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. Future versions work around this issue, but for 1.88 patch versions we prefer simply disabling the feature.

  • Dashboards and Widgets

    • When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.

    • Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.

      For more information, see Configuring Dashboard Parameters.

    • The new interaction type Search Link has been introduced, allowing users to create an interaction that will trigger a new search.

      For more information, see Manage Dashboard Interactions, Creating Event List Interactions.

    • You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.

      For more information, see Creating Event List Interactions.

    • The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.

      For more information, see Update Parameters.

    • The combo box has been updated to show multiple selections as "pills".

    • You can now delete or duplicate Event List Interactions from the Interactions overview page.

      For more information, see Deleting & Duplicating Event List Interactions.

    • Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.

      For more information, see Multi-value Parameters.

    • When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:

      • In a live query or dashboard, the startTime variable will contain the relative time, such as 2d whereas endTime will be empty.

      • In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.
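      The two cases above can be summarized in a small sketch (illustrative helper; the live/non-live behaviour is from the note, the function itself is hypothetical):

```python
def interaction_time_variables(is_live, relative_time=None,
                               last_run_start=None, last_run_end=None):
    """Sketch of the {{ startTime }} / {{ endTime }} behaviour: live queries
    expose the relative time (e.g. "2d") with an empty endTime; non-live
    queries expose the absolute start/end of the last run."""
    if is_live:
        return {"startTime": relative_time, "endTime": ""}
    return {"startTime": last_run_start, "endTime": last_run_end}

assert interaction_time_variables(True, relative_time="2d") == \
    {"startTime": "2d", "endTime": ""}
assert interaction_time_variables(False, last_run_start=1700000000000,
                                  last_run_end=1700003600000) == \
    {"startTime": 1700000000000, "endTime": 1700003600000}
```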

    • Interactive elements in visualizations now have the point cursor.

  • Log Collector

    • On the Config Overview page a column showing the state of the configuration has been added. The configuration can either be published or in draft state.

      A menu item has been added on the Config Overview page, that links to the Settings page.

      When clicking on an Error status on the Fleet Overview page, a dialog with the error details will open.

      For more information, see Falcon Log Collector Manage your Fleet.

    • Fleet Management updates:

      • Added the Basic Information page with primary information of a specific configuration, e.g. name, description, and number of assigned instances.

      • The Config Editor used to create/modify LogScale Collector configurations in LogScale has been augmented with context aware auto-completion, tooltips for keywords and highlighting of invalid settings.

      For more information, see Manage Remote Configurations.

  • Queries

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.

    • Improvements to query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.

    • Polling a query on /queryjobs can now delay the response a bit in order to allow returning a potentially done response. The typical effective delay is less than 2 seconds, and the positive effect is saving the extra poll roundtrip that would otherwise need to happen before the query completed. This in particular makes simple queries complete faster from the viewpoint of the client, as they do not have to wait for an extra poll roundtrip in most cases.

  • Functions

    • Performance improvements have been made to the match() query function in cases where ignoreCase=true is used together with either mode=cidr, or mode=string.

    • base64Decode() query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.
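      The same replacement behaviour can be reproduced with Python's standard library for comparison (the note does not specify which placeholder character LogScale uses; U+FFFD is the conventional Unicode replacement character assumed here):

```python
import base64

# b"\xff" is not valid UTF-8, so the decoded text gets a replacement
# character in its place instead of failing the whole decode.
encoded = base64.b64encode(b"ok\xff").decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8", errors="replace")
assert decoded == "ok\ufffd"
```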

    • When IOCs are not available, the ioc:lookup() query function will now produce an error. Previously, it only produced a warning.

    • The memory usage of the functions selectLast() and groupBy() has been improved.

  • Other

    • When the automatic segment rebalancing feature is enabled, ignore the segment storage table when evaluating whether dead ephemeral nodes can be removed automatically.

    • Create Repositories permission now also allows LogScale Self-Hosted users to create repositories.

  • Packages

    • The size limit of packages' lookup files has been changed to adhere to the MAX_FILEUPLOAD_SIZE configuration parameter. Previously the size limit was 1MB.

      For more information, see Exporting the Package.

Fixed in this release

  • Security

    • Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset or pass a different value to the parameter.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.

    • An issue in the Usage page that could prevent any data from being shown has been fixed.

      The Usage page now shows an error if there are any warnings from the query.

    • The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.

      For more information, see Displaying Fields.

  • Dashboards and Widgets

    • "" was being discarded when creating URLs for interactions. This issue has now been fixed.

    • Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.

    • The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.

    • Non-breaking space characters (ALT+Space) prevented Template Expressions from being resolved. This issue has been fixed.

    • '_' was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.

    • The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.

  • Queries

    • In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. This issue has now been fixed.

    • The Export query result to file dialog would not close in some cases. This issue has now been fixed.

    • Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start and run to completion.

  • Functions

  • Other

    • An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.

    • The following audit log issues have been fixed:

      • the audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.

      • the audit log for a view update did not use the updated view but the view data before the update.

    • An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.

    • An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.

    • Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range to not be included in the result. The visible symptom was that narrowing the search span provided more hits.

    • Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.

    • In ephemeral-disk mode, a dead node can now be removed via the UI regardless of any data present on the node: ephemeral mode knows how to ensure durability even when nodes are lost without notice.

      For more information, see Ephemeral Nodes and Cluster Identity.

Falcon LogScale 1.88.1 LTS (2023-06-22)

Version: 1.88.1
Type: LTS
Release Date: 2023-06-22
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.1/server-1.88.1.tar.gz

These notes include entries from the following previous releases: 1.88.0

Security fixes.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Nodes are no longer allowed to delete bucketed mini-segments involved in queries from local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.

    • SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.

New features and improvements

  • UI Changes

  • Automation and Alerts

    • Clicking the Labels button in Alerts will now show every unique label that has been created on every alert in the same repository. This means that you don't need to rewrite a label when wanting to add the same label to another alert. This feature also applies to Scheduled Searches.

    • The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error, while the dismiss icon only closes the message without clearing the error.

    • When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.

      For more information, see Creating Alerts.

    • The default time window for Alerts has been updated:

      • When creating an alert from the Alerts page, the default query time window has been changed from 24 Hours to 1 Hour to match the default throttle time.

      • When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.

      For more information, see Creating Alerts.

    • When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.

  • GraphQL API

  • Configuration

    • New configuration parameters have been added allowing control of client.rack for our Kafka consumers:

      • KAFKA_CLIENT_RACK_ENV_VAR — this variable is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.
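      A minimal environment sketch, assuming nodes already export their zone in ZONE (the values are illustrative):

```shell
# ZONE holds the rack/zone id used for client.rack on Kafka consumers.
export ZONE="zone-a"
# Optional: KAFKA_CLIENT_RACK_ENV_VAR names the variable that holds the
# value; it defaults to ZONE, so this line is redundant here and shown
# only for completeness.
export KAFKA_CLIENT_RACK_ENV_VAR=ZONE
```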

    • Selectively using the "S3 Intelligent-Tiering" storage class in AWS S3 continues to be supported: it is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention for the data in order to switch from the default "S3 Standard" to the "Intelligent" tier.

      The decision is made at the point of upload to the bucket only, whereas existing objects in the bucket are not modified.

      The bucket must be configured to disallow the optional Archive Access and Deep Archive Access tiers, as those do not provide the instant access that LogScale requires.

      As a consequence of that, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.

    • The new configuration parameter SEGMENT_READ_FADVICE has been introduced.

    • The following cluster-level setting has been introduced, editable via GraphQL mutations:

      This is also configurable via the DEFAULT_SEGMENT_REPLICATION_FACTOR configuration parameter.

      If configured via both environment variable and GraphQL mutation, the mutation has precedence.

      For new clusters the default is 1. For clusters upgrading from older versions, the initial value is taken from the STORAGE_REPLICATION_FACTOR environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade; this means that upgrading clusters should see no change to their replication factor unless specified in STORAGE_REPLICATION_FACTOR.

      The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments, or the environment variable DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS.

      If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.

    • Disable the AutomaticDigesterDistribution feature by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. In future versions, we've worked around this issue, but for 1.88 patch versions, we prefer simply disabling the feature.

  • Dashboards and Widgets

    • When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.

    • Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.

      For more information, see Configuring Dashboard Parameters.

    • The new interaction type Search Link has been introduced, allowing users to create an interaction that will trigger a new search.

      For more information, see Manage Dashboard Interactions, Creating Event List Interactions.

    • You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.

      For more information, see Creating Event List Interactions.

    • The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.

      For more information, see Update Parameters.

    • The combo box has been updated to show multiple selections as "pills".

    • You can now delete or duplicate Event List Interactions from the Interactions overview page.

      For more information, see Deleting & Duplicating Event List Interactions.

    • Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.

      For more information, see Multi-value Parameters.

    • When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:

      • In a live query or dashboard, the startTime variable will contain the relative time, such as 2d whereas endTime will be empty.

      • In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.
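      For illustration, a Search Link interaction URL template using the two variables might look like the following (host and view names are hypothetical):

```
https://logscale.example.com/example-view/search?query=count()&start={{ startTime }}&end={{ endTime }}
```

      In Live mode this expands with a relative start such as 2d and an empty end; in static mode both placeholders expand to the absolute times of the last run.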

    • Interactive elements in visualizations now use the pointer cursor.

  • Log Collector

    • On the Config Overview page, a column showing the state of the configuration has been added. The configuration can be either published or in draft state.

      A menu item that links to the Settings page has been added on the Config Overview page.

      When clicking an Error status on the Fleet Overview page, a dialog with the error details will open.

      For more information, see Falcon Log Collector Manage your Fleet.

    • Fleet Management updates:

      • Added the Basic Information page, showing primary information for a specific configuration, e.g. name, description, and number of assigned instances.

      • The Config Editor used to create and modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.

      For more information, see Manage Remote Configurations.

  • Queries

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.
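      For example, several dashboards or repeated searches running a query like the sketch below against the same file now share a single in-memory copy of that file (the file, field, and column names are hypothetical):

```
// Enrich events with the "name" column from users.csv, keyed on the
// username field.
match(file="users.csv", field=username, column=name)
```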

    • Improvements have been made to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.

    • Polling a query on /queryjobs can now delay the response briefly in order to allow returning a potentially done response. The typical effective delay is less than 2 seconds, and the positive effect is saving the extra poll round trip that would otherwise need to happen before the query completed. In particular, this makes simple queries complete faster from the viewpoint of the client, as they do not have to wait for an extra poll round trip in most cases.
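      As a client-side sketch of the polling flow (the host and repository name are hypothetical; the payload fields follow the queryjobs API as we understand it, so treat them as assumptions):

```python
import json

# Hypothetical endpoint for a repository named "myrepo".
BASE = "https://logscale.example.com/api/v1/repositories/myrepo/queryjobs"

def create_job_payload(query_string, start="24h", is_live=False):
    """Build the JSON body for a POST to /queryjobs starting a static query."""
    return {"queryString": query_string, "start": start, "isLive": is_live}

payload = json.dumps(create_job_payload("count()"))
# After POSTing this body, each subsequent GET of BASE/<job-id> may now
# be held open briefly by the server, so a "done" response can be
# returned without the client paying for one extra poll round trip.
```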

  • Functions

    • Performance improvements have been made to the match() query function in cases where ignoreCase=true is used together with either mode=cidr, or mode=string.

    • base64Decode() query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.

    • When IOCs are not available, the ioc:lookup() query function will now produce an error. Previously, it only produced a warning.

    • The memory usage of the functions selectLast() and groupBy() has been improved.

  • Other

    • When the automatic segment rebalancing feature is enabled, ignore the segment storage table when evaluating whether dead ephemeral nodes can be removed automatically.

    • Create Repositories permission now also allows LogScale Self-Hosted users to create repositories.

  • Packages

    • The size limit of packages' lookup files has been changed to adhere to the MAX_FILEUPLOAD_SIZE configuration parameter. Previously the size limit was 1MB.

      For more information, see Exporting the Package.

Fixed in this release

  • Security

    • We have verified that LogScale does not use the Akka dependency component affected by CVE-2023-31442 by default, and we have taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is present only if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to its default value (inet-address) mitigates the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass a different value to it.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.

    • An issue in the Usage page that could prevent any data from being shown has been fixed.

      The Usage page now shows an error if there are any warnings from the query.

    • The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.

      For more information, see Displaying Fields.

  • Dashboards and Widgets

    • "" was being discarded when creating URLs for interactions. This issue has now been fixed.

    • Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.

    • The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.

    • Non-breaking space characters (ALT+Space) prevented Template Expressions from being resolved. This issue has been fixed.

    • '_' was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.

    • The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.

  • Queries

    • In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. This issue has now been fixed.

    • The Export query result to file dialog would not close in some cases. This issue has now been fixed.

    • Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start and run to completion.

  • Functions

  • Other

    • An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.

    • The following audit log issues have been fixed:

      • the audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.

      • the audit log for a view update did not use the updated view but the view data before the update.

    • An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.

    • An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.

    • Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.

    • In ephemeral-disk mode, a dead node can now be removed via the UI regardless of any data present on the node: ephemeral mode knows how to ensure durability even when nodes are lost without notice.

      For more information, see Ephemeral Nodes and Cluster Identity.

Falcon LogScale 1.88.0 LTS (2023-05-24)

Version: 1.88.0
Type: LTS
Release Date: 2023-05-24
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: Yes


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.88.0/server-1.88.0.tar.gz

Bug fixes and updates.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • Nodes are no longer allowed to delete bucketed mini-segments involved in queries from local disks before the queries are done. This should help ensure queries do not "miss" querying these files if they are deleted while a query is running.

    • Change how downloads from bucket storage are prioritized for queries. Previously the highest priority query was allowed to download as many segments as it liked. We now try to estimate how much work a query has available in local segments, and prioritize fetching segments for those queries that are close to running out of local work and becoming blocked for that reason.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Docker images have been upgraded to Java 19.0.2 to address CVE-2022-45688.

    • SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.

New features and improvements

  • UI Changes

  • Automation and Alerts

    • Clicking the Labels button in Alerts will now show every unique label that has been created on every alert in the same repository. This means that you don't need to rewrite a label when wanting to add the same label to another alert. This feature also applies to Scheduled Searches.

    • The error message for an Alert or Scheduled Search on their edit pages now has a button for clearing the error, while the dismiss icon only closes the message without clearing the error.

    • When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.

      For more information, see Creating Alerts.

    • The default time window for Alerts has been updated:

      • When creating an alert from the Alerts page, the default query time window has been changed from 24 Hours to 1 Hour to match the default throttle time.

      • When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.

      For more information, see Creating Alerts.

    • When enabling an Alert or Scheduled Search with no actions, an inline warning message now appears instead of a message box.

  • GraphQL API

  • Configuration

    • New configuration parameters have been added allowing control of client.rack for our Kafka consumers:

      • KAFKA_CLIENT_RACK_ENV_VAR — this variable is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.

    • Selectively using the "S3 Intelligent-Tiering" storage class in AWS S3 continues to be supported: it is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention for the data in order to switch from the default "S3 Standard" to the "Intelligent" tier.

      The decision is made at the point of upload to the bucket only, whereas existing objects in the bucket are not modified.

      The bucket must be configured to disallow the optional Archive Access and Deep Archive Access tiers, as those do not provide the instant access that LogScale requires.

      As a consequence of that, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.

    • The new configuration parameter SEGMENT_READ_FADVICE has been introduced.

    • The following cluster-level setting has been introduced, editable via GraphQL mutations:

      This is also configurable via the DEFAULT_SEGMENT_REPLICATION_FACTOR configuration parameter.

      If configured via both environment variable and GraphQL mutation, the mutation has precedence.

      For new clusters the default is 1. For clusters upgrading from older versions, the initial value is taken from the STORAGE_REPLICATION_FACTOR environment variable, if set. If the variable is not set, the value is taken from the replication factor of the storage partition table prior to the upgrade; this means that upgrading clusters should see no change to their replication factor unless specified in STORAGE_REPLICATION_FACTOR.

      The feature can be disabled in case of problems via either the GraphQL mutation setAllowRebalanceExistingSegments, or the environment variable DEFAULT_ALLOW_REBALANCE_EXISTING_SEGMENTS.

      If you need to disable the feature, please reach out to Support and share your concerns so we can try to address them. We intend to remove the option to handle segment partitions manually in the future.

    • Disable the AutomaticDigesterDistribution feature by default. While the feature works, it can cause performance issues on very large installs if nodes are rebooted repeatedly. In future versions, we've worked around this issue, but for 1.88 patch versions, we prefer simply disabling the feature.

  • Dashboards and Widgets

    • When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.

    • Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.

      For more information, see Configuring Dashboard Parameters.

    • The new interaction type Search Link has been introduced, allowing users to create an interaction that will trigger a new search.

      For more information, see Manage Dashboard Interactions, Creating Event List Interactions.

    • You can now save interactions with a saved query on the Search page. Interactions in saved queries are also supported in Packages.

      For more information, see Creating Event List Interactions.

    • The new interaction type Update Parameters has been introduced. This interaction allows you to update parameters in the context you're working in — on the dashboard or on the Search page.

      For more information, see Update Parameters.

    • The combo box has been updated to show multiple selections as "pills".

    • You can now delete or duplicate Event List Interactions from the Interactions overview page.

      For more information, see Deleting & Duplicating Event List Interactions.

    • Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.

      For more information, see Multi-value Parameters.

    • When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:

      • In a live query or dashboard, the startTime variable will contain the relative time, such as 2d whereas endTime will be empty.

      • In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.

    • Interactive elements in visualizations now use the pointer cursor.

  • Log Collector

    • On the Config Overview page, a column showing the state of the configuration has been added. The configuration can be either published or in draft state.

      A menu item that links to the Settings page has been added on the Config Overview page.

      When clicking an Error status on the Fleet Overview page, a dialog with the error details will open.

      For more information, see Falcon Log Collector Manage your Fleet.

    • Fleet Management updates:

      • Added the Basic Information page, showing primary information for a specific configuration, e.g. name, description, and number of assigned instances.

      • The Config Editor used to create and modify LogScale Collector configurations in LogScale has been augmented with context-aware auto-completion, tooltips for keywords, and highlighting of invalid settings.

      For more information, see Manage Remote Configurations.

  • Queries

    • Reduced the amount of memory used when multiple queries use the match() function with the same arguments. Before, if you ran many queries that used the same file, the contents of the file would be represented multiple times in memory, once for each query. This could put you at risk of exhausting the server's memory if the files were large. With this change the file contents will be shared between the queries and represented only once. This enables the server to run more queries and/or handle larger files.

      For more information, see Lookup Files Operations.

    • Improvements have been made to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.

  • Functions

    • Performance improvements have been made to the match() query function in cases where ignoreCase=true is used together with either mode=cidr, or mode=string.

    • base64Decode() query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.

    • When IOCs are not available, the ioc:lookup() query function will now produce an error. Previously, it only produced a warning.

    • The memory usage of the functions selectLast() and groupBy() has been improved.

  • Other

    • When the automatic segment rebalancing feature is enabled, ignore the segment storage table when evaluating whether dead ephemeral nodes can be removed automatically.

    • Create Repositories permission now also allows LogScale Self-Hosted users to create repositories.

  • Packages

    • The size limit of packages' lookup files has been changed to adhere to the MAX_FILEUPLOAD_SIZE configuration parameter. Previously the size limit was 1MB.

      For more information, see Exporting the Package.

Fixed in this release

  • UI Changes

    • The Search page would reload when using the browser's history navigation buttons. This issue has now been fixed.

    • An issue in the Usage page that could prevent any data from being shown has been fixed.

      The Usage page now shows an error if there are any warnings from the query.

    • The Fields Panel flyout displayed the bottom 10 values rather than the top 10 values. This issue has now been fixed.

      For more information, see Displaying Fields.

  • Dashboards and Widgets

    • "" was being discarded when creating URLs for interactions. This issue has now been fixed.

    • Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.

    • The tooltip in the Time Chart widget would not show any data points. This issue has now been fixed.

    • Non-breaking space characters (ALT+Space) prevented Template Expressions from being resolved. This issue has been fixed.

    • '_' was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.

    • Fixed an issue where clicking the Inspect link in alert notifications would land on a missing page.

    • The values of a FixedList Parameter on a dashboard would change sort ordering after being exported to a YAML template file. This issue has been fixed.

  • Queries

    • In clusters with bucket storage, queries taking more than 90 minutes could spuriously fail with a complaint that segments were missing. This issue has now been fixed.

    • The Export query result to file dialog would not close in some cases. This issue has now been fixed.

    • Restart of queries based on lookup files has been fixed: only live queries need restarting from changes to uploaded files that they depend on. Scheduled Searches and static queries use the version of the file present when they start and run to completion.

  • Functions

  • Other

    • An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.

    • The following audit log issues have been fixed:

      • the audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.

      • the audit log for a view update did not use the updated view but the view data before the update.

    • An uploaded file would sometimes disappear immediately after uploading. This issue has been fixed.

    • An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.

    • Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.

    • In ephemeral-disk mode, a dead node can now be removed via the UI regardless of any data present on the node: ephemeral mode knows how to ensure durability even when nodes are lost without notice.

      For more information, see Ephemeral Nodes and Cluster Identity.

Falcon LogScale 1.87.0 GA (2023-04-25)

Version: 1.87.0
Type: GA
Release Date: 2023-04-25
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Dashboards and Widgets

    • When using the Edit in search view item on a dashboard widget, the values set in parameters in the query are also carried over into the search view.

    • When Setting Up a Dashboard Interaction, the {{ startTime }} and {{ endTime }} special variables now work differently, depending on whether the query, widget or dashboard is running in Live mode or not. They now work as follows:

      • In a live query or dashboard, the startTime variable will contain the relative time, such as 2d whereas endTime will be empty.

      • In a non-live query or dashboard, startTime will be the absolute start time when the query was last run. endTime, similarly, will have the end time of when the query was last run.
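
The live/non-live distinction above can be sketched as follows. This is an illustrative model of how the {{ startTime }} and {{ endTime }} variables resolve, not LogScale's implementation, and the example timestamps are made up.

```python
def resolve_time_variables(is_live, relative_start="2d",
                           last_run_start=None, last_run_end=None):
    """Sketch of the described resolution of the {{ startTime }} and
    {{ endTime }} interaction variables."""
    if is_live:
        # Live: startTime carries the relative window, endTime is empty.
        return {"startTime": relative_start, "endTime": ""}
    # Non-live: both variables are the absolute times of the last run.
    return {"startTime": last_run_start, "endTime": last_run_end}

# Live dashboard: relative window only.
print(resolve_time_variables(True))
# → {'startTime': '2d', 'endTime': ''}

# Non-live dashboard: absolute times of the last query run.
print(resolve_time_variables(False,
                             last_run_start="2023-04-25T08:00:00Z",
                             last_run_end="2023-04-25T09:00:00Z"))
```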

  • Functions

    • base64Decode() query function has been updated such that, when decoding to UTF-8, invalid code points are replaced with a placeholder character.

    • The memory usage of the functions selectLast() and groupBy() has been improved.
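
The base64Decode() change above can be illustrated with Python's equivalent replacement decoding. The placeholder character LogScale inserts is not specified in the release note, so U+FFFD (Python's replacement character) is an assumption used to show the idea.

```python
import base64

def decode_base64_utf8(encoded: str) -> str:
    """Decode base64, then decode the resulting bytes as UTF-8,
    replacing invalid code points with a placeholder character
    (U+FFFD here; LogScale's exact placeholder is an assumption)."""
    raw = base64.b64decode(encoded)
    return raw.decode("utf-8", errors="replace")

# b"\xff" is not valid UTF-8, so that byte becomes the placeholder.
print(decode_base64_utf8(base64.b64encode(b"ok\xff").decode()))
```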

  • Packages

    • The size limit of packages' lookup files has been changed to adhere to the MAX_FILEUPLOAD_SIZE configuration parameter. Previously the size limit was 1MB.

      For more information, see Exporting the Package.

Fixed in this release

  • UI Changes

    • An issue on the Usage page that could prevent any data from being shown has been fixed.

      The Usage page now shows an error if there are any warnings from the query.

  • Dashboards and Widgets

    • Attempting to remove a widget on a dashboard would sometimes remove a different widget than the one selected. This issue has been fixed.

    • Non-breaking space characters (ALT+Space) prevented Template Expressions from being resolved. This issue has been fixed.

  • Queries

    • In clusters with bucket storage, queries running for more than 90 minutes could spuriously fail with a complaint that segments were missing. This issue has now been fixed.

  • Functions

    • The groupBy() function would not always warn upon exceeding the default limit. This issue has now been fixed.

    • timeChart() provided with unit and groupBy() as the aggregation function would not warn on exceeding the default groupBy() limit. This issue has now been fixed.

Falcon LogScale 1.86.0 GA (2023-04-18)

Version: 1.86.0
Type: GA
Release Date: 2023-04-18
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Automation and Alerts

    • When creating a new Alert, you now have a pulldown menu that suggests labels that you've previously created for other alerts. The same applies to Scheduled Searches.

      For more information, see Creating Alerts.

  • Configuration

    • New configuration parameters have been added allowing control of client.rack for our Kafka consumers:

      • KAFKA_CLIENT_RACK_ENV_VAR — this variable is read to find the name of the variable that holds the value. It defaults to ZONE, which is the same variable applied to the LogScale node zones by default.
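
The indirection performed by KAFKA_CLIENT_RACK_ENV_VAR (read one variable to learn the name of the variable that actually holds the client.rack value) can be sketched as follows. This is a model of the described lookup, not LogScale's code.

```python
def client_rack(environ):
    """Sketch of the KAFKA_CLIENT_RACK_ENV_VAR indirection:
    that variable names the variable holding the client.rack value,
    and defaults to ZONE when unset."""
    source_var = environ.get("KAFKA_CLIENT_RACK_ENV_VAR", "ZONE")
    return environ.get(source_var)

# With no override, the rack value is read from ZONE.
print(client_rack({"ZONE": "eu-west-1a"}))                 # eu-west-1a
# Pointing the indirection at another variable:
print(client_rack({"KAFKA_CLIENT_RACK_ENV_VAR": "MY_RACK",
                   "MY_RACK": "rack-7"}))                  # rack-7
```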

Fixed in this release

  • Dashboards and Widgets

    • "" was being discarded when creating URLs for interactions. This issue has now been fixed.

    • '_' was not recognized as a valid first symbol for parameters when parsing queries. This issue has now been fixed.

Falcon LogScale 1.85.0 GA (2023-04-13)

Version: 1.85.0
Type: GA
Release Date: 2023-04-13
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • SnakeYAML has been upgraded to 2.0 to address CVE-2022-1471.

New features and improvements

  • UI Changes

    • Improvements in UI table visualization: long column header text is now always left-aligned (instead of center-aligned and overlapping) and uses a different color.

    • Organization level query blocking has been added to Organization Settings UI.

      For more information, see Organization Query Monitor.

  • Automation and Alerts

    • Clicking the Labels button in Alerts will now show every unique label that has been created on every alert in the same repository. This means that you don't need to rewrite a label when wanting to add the same label to another alert. This feature also applies to Scheduled Searches.

  • Configuration

    • Using the AWS S3 storage class "S3 Intelligent-Tiering" selectively, on files that LogScale knows about, continues to be supported: it is controlled by the new dynamic configuration BucketStorageUploadInfrequentThresholdDays, which sets the minimum number of days of remaining retention the data must have in order to switch from the default "S3 Standard" to the "Intelligent" tier.

      The decision is made only at the point of upload to the bucket; existing objects in the bucket are not modified.

      The bucket must be configured to disallow the optional Archive Access and Deep Archive Access tiers, as those do not provide instant access, which LogScale requires.

      As a consequence, do not enable automatic archiving within the S3 Intelligent-Tiering storage class.
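
The upload-time tier decision described above can be modeled as follows. Only the general rule comes from the release note; the exact comparison semantics (for example, whether the threshold is inclusive) and the use of None for a disabled threshold are assumptions for illustration.

```python
def storage_class_for_upload(remaining_retention_days, threshold_days):
    """Sketch of the decision controlled by the
    BucketStorageUploadInfrequentThresholdDays dynamic configuration:
    data with at least `threshold_days` of retention remaining is
    uploaded with S3 Intelligent-Tiering; everything else stays on
    the default S3 Standard. A disabled threshold is modeled as None."""
    if threshold_days is not None and remaining_retention_days >= threshold_days:
        return "INTELLIGENT_TIERING"
    return "STANDARD"

# Long-lived data qualifies for the Intelligent tier; short-lived does not.
print(storage_class_for_upload(remaining_retention_days=300, threshold_days=30))
print(storage_class_for_upload(remaining_retention_days=5, threshold_days=30))
```

The decision applies only when the object is uploaded; as noted above, objects already in the bucket keep their storage class.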

    • The new configuration parameter SEGMENT_READ_FADVICE has been introduced.

  • Dashboards and Widgets

    • Introduced a new setting for dashboard parameters configuration to defer query execution: the dashboard will not execute any queries on page load until the user provides a value to the parameter.

      For more information, see Configuring Dashboard Parameters.

    • The new interaction type Search Link has been introduced, allowing users to create an interaction that will trigger a new search.

      For more information, see Manage Dashboard Interactions, Creating Event List Interactions.

    • Multivalued parameters have been introduced to pass an array of values to the query. The support is limited to the Dashboards page.

      For more information, see Multi-value Parameters.

  • Log Collector

    • Fleet Management updates:

      • Added the Basic Information page with primary information about a specific configuration, e.g. name, description, and number of assigned instances.

      • The Config Editor used to create/modify LogScale Collector configurations in LogScale has been augmented with context aware auto-completion, tooltips for keywords and highlighting of invalid settings.

      For more information, see Manage Remote Configurations.

  • Queries

    • Improvements to the query scheduler logic for "shelving", i.e., pausing queries considered too expensive. The pause/unpause logic is now more responsive and unpauses queries faster when they become eligible to run.

  • Functions

    • When IOCs are not available, the ioc:lookup() query function will now produce an error. Previously, it only produced a warning.

  • Other

    • Create Repositories permission now also allows LogScale Self-Hosted users to create repositories.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • Other

    • An issue that would cause query workers to handle mini-segments for longer than intended has been fixed.

    • The following audit log issues have been fixed:

      • The audit log logged the name of the view owning the view bindings instead of the repository it links to. The name now matches the id in the binding log entry.

      • The audit log for a view update used the view data from before the update instead of the updated view.

    • An issue that would cause bucket downloads to retry indefinitely for certain types of segments has been fixed.

Falcon LogScale 1.84.0 Not Released (2023-04-04)

Version: 1.84.0
Type: Not Released
Release Date: 2023-04-04
Availability: Internal Only
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Not released.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

Falcon LogScale 1.83.0 GA (2023-03-28)

Version: 1.83.0
Type: GA
Release Date: 2023-03-28
Availability: Cloud
End of Support: 2024-05-31
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • Support for running on Java 11, 12, 13, 14, 15 and 16 will be removed by the end of September 2023.

New features and improvements

  • Automation and Alerts

    • The default time window for Alerts has been updated:

      • When creating an alert from the Alerts page, the default query time window has been changed from 24 Hours to 1 Hour to match the default throttle time.

      • When creating an alert from the Search page, the default Throttle period has been changed to match that of the query time window set.

      For more information, see Creating Alerts.

  • GraphQL API

    • The querySearchDomain GraphQL query now allows you to search for Views and Repositories based on your permissions — previously, enforcing specific permissions caused errors.

Fixed in this release

  • Other

    • Fixed bucket downloads that could fail if the segment they were fetching disappeared from global.

Falcon LogScale 1.82.4 LTS (2023-11-20)

Version: 1.82.4
Type: LTS
Release Date: 2023-11-20
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.4/server-1.82.4.tar.gz

These notes include entries from the following previous releases: 1.82.0, 1.82.1, 1.82.2, 1.82.3

Bug fix and updates.

New features and improvements

  • UI Changes

    • Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.

  • Queries

    • Added backend support for organization level query blocking. Actors with the BlockQueries permission are able to block and stop queries running within their organization.

  • Functions

    • The match() query function has been improved in terms of speed when using glob as the mode.

  • Other

    • Added an optional global argument to the stopAllQueries, stopStreamingQueries, stopHistoricalQueries, blockedQueries, addToBlocklistById, and addToBlocklist permissions. The default is false, i.e. they apply within the caller's own organization only.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • Security

    • CrowdStrike has verified that, by default, LogScale does not use the Akka dependency component affected by CVE-2023-31442, and has taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass it a different value.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • Fixed some missing Field Interactions options for the JSON data type in the Event List.

      For more information, see Field Data Types.

  • API

    • Fixed an issue where the API Explorer could fail to load in some configurations when using cookie authentication.

  • Dashboards and Widgets

    • The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.

      For more information, see Manage Dashboard Parameters.

  • Other

    • Fixed a permission issue where LogScale Self-Hosted had a dependency on the ManageOrganizations system permission, which should not apply to that environment; the ManageCluster system permission is now sufficient for Self-Hosted.

    • Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range not being included in the result. The visible symptom was that narrowing the search span provided more hits.

    • Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.

    • Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.

Falcon LogScale 1.82.3 LTS (2023-07-04)

Version: 1.82.3
Type: LTS
Release Date: 2023-07-04
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.3/server-1.82.3.tar.gz

These notes include entries from the following previous releases: 1.82.0, 1.82.1, 1.82.2

Bug fix and updates.

New features and improvements

  • UI Changes

    • Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.

  • Queries

    • Added backend support for organization level query blocking. Actors with the BlockQueries permission are able to block and stop queries running within their organization.

  • Functions

    • The match() query function has been improved in terms of speed when using glob as the mode.

  • Other

    • Added an optional global argument to the stopAllQueries, stopStreamingQueries, stopHistoricalQueries, blockedQueries, addToBlocklistById, and addToBlocklist permissions. The default is false, i.e. they apply within the caller's own organization only.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • Security

    • CrowdStrike has verified that, by default, LogScale does not use the Akka dependency component affected by CVE-2023-31442, and has taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass it a different value.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • API

    • Fixed an issue where the API Explorer could fail to load in some configurations when using cookie authentication.

  • Dashboards and Widgets

    • The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.

      For more information, see Manage Dashboard Parameters.

  • Other

    • Fixed a permission issue where LogScale Self-Hosted had a dependency on the ManageOrganizations system permission, which should not apply to that environment; the ManageCluster system permission is now sufficient for Self-Hosted.

    • Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range not being included in the result. The visible symptom was that narrowing the search span provided more hits.

    • Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.

    • Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.

Falcon LogScale 1.82.2 LTS (2023-06-22)

Version: 1.82.2
Type: LTS
Release Date: 2023-06-22
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.2/server-1.82.2.tar.gz

These notes include entries from the following previous releases: 1.82.0, 1.82.1

Security fixes.

New features and improvements

  • UI Changes

    • Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.

  • Queries

    • Added backend support for organization level query blocking. Actors with the BlockQueries permission are able to block and stop queries running within their organization.

  • Functions

    • The match() query function has been improved in terms of speed when using glob as the mode.

  • Other

    • Added an optional global argument to the stopAllQueries, stopStreamingQueries, stopHistoricalQueries, blockedQueries, addToBlocklistById, and addToBlocklist permissions. The default is false, i.e. they apply within the caller's own organization only.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • Security

    • CrowdStrike has verified that, by default, LogScale does not use the Akka dependency component affected by CVE-2023-31442, and has taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass it a different value.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • API

    • Fixed an issue where the API Explorer could fail to load in some configurations when using cookie authentication.

  • Dashboards and Widgets

    • The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.

      For more information, see Manage Dashboard Parameters.

  • Other

    • Fixed a permission issue where LogScale Self-Hosted had a dependency on the ManageOrganizations system permission, which should not apply to that environment; the ManageCluster system permission is now sufficient for Self-Hosted.

    • Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.

    • Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.

Falcon LogScale 1.82.1 LTS (2023-05-15)

Version: 1.82.1
Type: LTS
Release Date: 2023-05-15
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.1/server-1.82.1.tar.gz

These notes include entries from the following previous releases: 1.82.0

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.

  • Queries

    • Added backend support for organization level query blocking. Actors with the BlockQueries permission are able to block and stop queries running within their organization.

  • Functions

    • The match() query function has been improved in terms of speed when using glob as the mode.

  • Other

    • Added an optional global argument to the stopAllQueries, stopStreamingQueries, stopHistoricalQueries, blockedQueries, addToBlocklistById, and addToBlocklist permissions. The default is false, i.e. they apply within the caller's own organization only.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • API

    • Fixed an issue where the API Explorer could fail to load in some configurations when using cookie authentication.

  • Dashboards and Widgets

    • The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.

      For more information, see Manage Dashboard Parameters.

  • Other

    • Fixed a permission issue where LogScale Self-Hosted had a dependency on the ManageOrganizations system permission, which should not apply to that environment; the ManageCluster system permission is now sufficient for Self-Hosted.

    • Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.

    • Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.

Falcon LogScale 1.82.0 LTS (2023-04-12)

Version: 1.82.0
Type: LTS
Release Date: 2023-04-12
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.82.0/server-1.82.0.tar.gz

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Improvements have been made to the Fields Panel, which would flicker when switching between the Results and Events tabs while the query was live. It now displays the fields of the aggregated query when on the Results tab, and the fields of the events query when on the Events tab.

  • Queries

    • Added backend support for organization level query blocking. Actors with the BlockQueries permission are able to block and stop queries running within their organization.

  • Functions

    • The match() query function has been improved in terms of speed when using glob as the mode.

  • Other

    • Added an optional global argument to the stopAllQueries, stopStreamingQueries, stopHistoricalQueries, blockedQueries, addToBlocklistById, and addToBlocklist permissions. The default is false, i.e. they apply within the caller's own organization only.

    • Worker-level query scheduling has been adjusted to avoid long-term starvation of expensive queries.

Fixed in this release

  • API

    • Fixed an issue where the API Explorer could fail to load in some configurations when using cookie authentication.

  • Dashboards and Widgets

    • The dropdown menu for dashboard parameter suggestions is now faster and can handle several thousand entries without blocking the UI.

      For more information, see Manage Dashboard Parameters.

  • Other

    • Fixed a permission issue where LogScale Self-Hosted had a dependency on the ManageOrganizations system permission, which should not apply to that environment; the ManageCluster system permission is now sufficient for Self-Hosted.

    • Fixed an issue that occurred when creating users: when multiple user creation requests were sent at the same time, multiple users were in some cases created with the same name.

    • Fixed an issue that could cause recently merged mini-segments to be excluded from searches after a reboot.

Falcon LogScale 1.81.0 GA (2023-03-14)

Version: 1.81.0
Type: GA
Release Date: 2023-03-14
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Automation and Alerts

  • The deprecated REST Alert API has been removed.

Other

  • The deprecated REST Action API endpoint for testing actions has been removed.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • OpenSSL in Docker images has been upgraded to address CVE-2023-0286 issue.

New features and improvements

  • UI Changes

    • The Query Monitor page is now available at the organization level. Users with the Monitor queries organization-level permission get access to the page, where they can see queries running in their organization.

      For more information, see Query Monitor, Organization Query Monitor.

  • Automation and Alerts

    • The throttle field on alerts can now be imported and exported.

  • Ingestion

    • New ingest endpoint api/v1/ingest/json for ingesting JSON objects and JSON arrays has been added.

      For more information, see Ingesting Raw JSON Data.
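
A minimal sketch of calling the new endpoint is shown below. The endpoint path comes from the release note; the Bearer ingest-token header and the example payload are assumptions based on LogScale's other ingest APIs, so verify them against the Ingesting Raw JSON Data documentation before use. The request is only constructed here, not sent.

```python
import json
from urllib import request

def build_json_ingest_request(base_url, ingest_token, events):
    """Build (but do not send) a POST to the api/v1/ingest/json
    endpoint. `events` may be a JSON object or a JSON array, per the
    release note; the auth scheme is an assumption."""
    body = json.dumps(events).encode("utf-8")
    return request.Request(
        url=f"{base_url}/api/v1/ingest/json",
        data=body,
        headers={
            "Authorization": f"Bearer {ingest_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_json_ingest_request("https://example.logscale.local", "TOKEN",
                                [{"service": "web", "message": "hello"}])
print(req.full_url)   # https://example.logscale.local/api/v1/ingest/json
```

Sending the request would be a matter of passing it to `urllib.request.urlopen` (or any HTTP client) once the host and token are real.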

  • Other

    • Event redaction will no longer rewrite mini-segments. Instead, the redaction will be delayed until all mini-segments that would be affected have been merged.

Fixed in this release

  • Falcon Data Replicator

    • Fixed a bug where testing new FDR feeds that use S3 Aliasing would fail for valid credentials.

  • Dashboards and Widgets

    • The following items have been fixed:

      • Parameter bindings would not be visible for imported dashboards when configuring interactions.

      • Imported dashboard containing interactions would be perceived as invalid.

      For more information, see Manage Dashboard Interactions.

  • Functions

    • Fixed a bug where the query editor would wrongly claim that predicate functions used as match guards were missing an argument to the field parameter.

  • Other

    • Fixed some issues in the event redaction implementation which could cause the redaction to fail in rare cases.

    • Fixed an issue which could cause mini-segments to not all be on the same host for a short time, while those mini-segments were being merged. This could cause queries to be unable to query them.

    • A bug has been fixed that caused recent mini-segments to be missed in queries if the mini-segments were merged during the query.

Falcon LogScale 1.80.0 GA (2023-03-07)

Version: 1.80.0
Type: GA
Release Date: 2023-03-07
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Ingestion

    • Ingested events were not limited in size if the bulk of the data in the event was in fields other than @rawstring. The limit is now enforced. Events that exceed the limit on event size at ingest are handled as follows:

      • @rawstring is truncated to the maximum allowed length, and all other fields are dropped from the event.

      • @timestamp becomes the ingest time.

      • @timezone becomes UTC.

      (This is identical to the previous handling of oversized @rawstring).
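
The handling above can be sketched as follows. The size limit, the size calculation, and the dict representation of an event are illustrative assumptions; the truncate/drop/reset behavior mirrors the list above.

```python
MAX_EVENT_SIZE = 1024  # illustrative limit; the real limit is configured in LogScale

def enforce_event_size(event, ingest_time):
    """Sketch of the described handling of oversized events at ingest:
    truncate @rawstring to the maximum length, drop all other fields,
    and reset the timestamp fields."""
    size = sum(len(str(v)) for v in event.values())  # assumed size metric
    if size <= MAX_EVENT_SIZE:
        return event
    return {
        "@rawstring": event.get("@rawstring", "")[:MAX_EVENT_SIZE],
        "@timestamp": ingest_time,  # ingest time replaces the event time
        "@timezone": "UTC",
    }

small = enforce_event_size({"@rawstring": "hello"}, "2023-03-07T00:00:00Z")
big = enforce_event_size({"@rawstring": "x" * 2000, "extra": "y" * 500},
                         "2023-03-07T00:00:00Z")
print(small)                       # unchanged
print(len(big["@rawstring"]), big["@timezone"])
```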

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by the issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

New features and improvements

  • UI Changes

    • Whether one can create a new repository is now controlled by the Create repository permission in the UI.

  • Configuration

    • Removed NEW_VHOST_SELECTION_ENABLED as a configuration option. The option has been true by default since 1.70; an opt-out is no longer needed.

  • Dashboards and Widgets

    • Changed the query editor when editing dashboard queries to be the same that is used on the Search page.

  • Log Collector

    • New Template feature added to the Fleet Management page, which allows you to:

      • upload a yaml file when creating a new configuration

      • export either the published or draft version of a configuration file.

      For more information, see Fleet Management Overview.

  • Queries

    • Added backend support for organization level query monitor. The new MonitorQueries permission now allows viewing queries that are running within the organization.

  • Packages

    • Interactions installed from a package use the new repository where the package is installed.

Fixed in this release

  • UI Changes

    • Fixed high CPU usage in the UI, present since LogScale 1.75, when the Time Zone Selector dropdown was displayed.

  • Configuration

    • Automatic generation and updating of the digest partitions table has been enabled, and manual editing is no longer supported. See Digest Rules for reference.

      The table will be kept up to date based on the following node-level settings (see Starting a New LogScale Node):

      • ZONE defines a node's zone. The table we generate will attempt to distribute segments across as many zones as possible.

      • Nodes with many cores will appear in the table more often than nodes with fewer cores.

      • Nodes with a NODE_ROLES setting that excludes digest work will not appear in the table.

      A cluster-level setting has also been introduced: setDigestReplicationFactor GraphQL mutation configures the replication factor to use for the table. This is also settable via the environment variable DEFAULT_DIGEST_REPLICATION_FACTOR.

      Automatic management of the digest partition table is now handled by the environment variable DEFAULT_ALLOW_UPDATE_DESIRED_DIGESTERS. We intend to remove the option to handle digest partitions manually in the future.
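The replication factor can also be set via the setDigestReplicationFactor GraphQL mutation mentioned above. A sketch of building the request body in Python follows; the argument name ("factor") and the exact mutation signature are assumptions for illustration, so consult the GraphQL schema for the real shape. POST the resulting body to the cluster's /graphql endpoint with an API token.

```python
import json

def build_set_digest_replication_factor(factor: int) -> str:
    """Build a GraphQL request body for the setDigestReplicationFactor
    mutation named above. The variable name "factor" is an illustrative
    assumption; check the schema for the actual argument name."""
    mutation = (
        "mutation SetFactor($factor: Int!) {"
        " setDigestReplicationFactor(factor: $factor) }"
    )
    return json.dumps({"query": mutation, "variables": {"factor": factor}})
```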

  • Dashboards and Widgets

    • Keyboard combinations Cmd+Z/Ctrl+Z no longer delete the query on dashboard widgets.

  • Functions

    • A performance issue in collect() when it collected many values has been fixed.

    • Fixed validation of join() and join-like functions in conditional expressions and subqueries, where validation errors lacked positional information.

    • Fixed an issue where joins in case statements, match statements, and subqueries would mark the entire query as erroneous.

  • Other

    • Some mini-segments would be excluded from queries in cases where those mini-segments had previously been merged, but the merge was reverted.

    • Two hosts booted at around the same time would conflict on which vhost number to use, causing one of the hosts to crash.

    • Warnings that some data segments could not be found on any server are no longer cached. This prevents queries from displaying the warning spuriously.

    • Mini-segments would be removed too early from nodes which were querying them, causing queries to be missing some data.

Falcon LogScale 1.79.0 GA (2023-02-28)

Version: 1.79.0
Type: GA
Release Date: 2023-02-28
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Configuration

    • The behavior of nodes using the ingestonly role has changed. Previously, such nodes did not write to global and did not register themselves in the cluster. They now do both.

      The old behavior can be restored by setting NEW_INGEST_ONLY_NODE_SEMANTICS=false. If you do this, please reach out to Support and outline your need, as this option will be removed in the near future.

New features and improvements

  • Automation and Alerts

    • When creating or editing Alerts and Scheduled Searches, it is now possible to specify another user the alert or scheduled search should run as, via the new organization permission ChangeTriggersToRunAsOtherUsers.

      It is now checked that the user selected to run the alert or scheduled search has permissions to run it. Previously, that was first checked when trying to run the alert or scheduled search.

      The new feature checks whether the user trying to create or edit an alert or scheduled search has permission to change and run it as another user. If the feature is enabled, you can select the user to run an alert or scheduled search as, from a list of users.

      See Creating Alerts and Scheduled Search Run on Behalf of for more information.

  • Functions

    • Memory consumption of the format() function has been decreased.

    • Introduced a memory limit in collect() mapper phase. The collect() function now collects up to the value of the limit argument or 10 MiB worth of distinct values, whichever comes first.

Fixed in this release

  • Falcon Data Replicator

  • UI Changes

    • The Event Distribution Histogram wouldn't show properly after manipulation of the @timestamp field.

  • Dashboards and Widgets

    • Fixed dashboard links to the same dashboard, as they would not correctly update the parameters.

    • In visualizations using the timeChart() or bucket() functions, when no results were returned you would just see an empty page. Consistently with other visualizations, you will now see a no-result message displayed, such as No results in active time window or Search Completed. No results found — depending on whether Live mode is selected or not.

Falcon LogScale 1.78.0 GA (2023-02-21)

Version: 1.78.0
Type: GA
Release Date: 2023-02-21
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, the restriction applies to the size after decompression.

New features and improvements

  • UI Changes

    • An explicit logout message now indicates that the user's session has been terminated.

    • The Time Zone Selector now shows the timezone as +00:00 instead of -00:00 when the offset is zero.

    • All Clone items in the UI have been renamed to Duplicate, to be consistent with what they actually do.

  • Automation and Alerts

    • When updating or creating new Actions, any server errors will be displayed in a summary under the form. The server errors in the summary will now specify the form field title where the error occurred, to easily identify where the error is.

    • Removed the side panel when creating or editing Alerts or Scheduled Searches.

  • Configuration

    • The default value of MAX_INGEST_REQUEST_SIZE has been reduced from 1024 MB to 32 MB. This limits the size of ingest requests and rejects oversized requests. If the request is compressed within HTTP, then this restricts the size after decompressing.
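Because the limit applies after decompression, a receiving service has to measure the inflated size. The check can be sketched as below, streaming the decompression so an oversized request is rejected without inflating everything. This assumes gzip HTTP compression and is an illustration, not LogScale's actual code.

```python
import gzip
import io

MAX_INGEST_REQUEST_SIZE = 32 * 1024 * 1024  # new default: 32 MB

def request_within_limit(body: bytes, gzipped: bool) -> bool:
    """Check an ingest request body against the limit. As described
    above, the limit applies to the size *after* decompression."""
    if not gzipped:
        return len(body) <= MAX_INGEST_REQUEST_SIZE
    seen = 0
    with gzip.GzipFile(fileobj=io.BytesIO(body)) as f:
        # Decompress in chunks so we can bail out early.
        while chunk := f.read(64 * 1024):
            seen += len(chunk)
            if seen > MAX_INGEST_REQUEST_SIZE:
                return False
    return True
```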

  • Functions

    • The array:filter() function is now generally available.

    • Introduced the new query function bitfield:extractFlags().

    • More time format modifiers are now supported in the format() function:

      • Full and abbreviated month, day-of-week names, and the century

      • Date/time composition format Day Mon DD HH:MM:SS Zone YYYY, e.g., Tue Jun 22 16:45:05 GMT+1 1993.

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when it needs to, including those set as sticky using the REST API.

    • An enhancement has been made so that when the number of ingest partitions is increased, fresh partitions are assigned to all non-idle datasources based on the new set of partitions. Before this change, only new datasources (new tag combinations) would use the new partitions. The auto-balancing does not start if there are nodes in the cluster running versions prior to 1.78.0.

Fixed in this release

  • UI Changes

    • When exporting a dashboard, alert or scheduled search as a template, the labels field was missing in the exported YAML.

      For more information, see Managing Alerts, Scheduled Searches, Dashboards & Widgets.

    • Double-clicking in the Event List would open the Inspection Panel instead of making a text selection. It now correctly selects the word being double-clicked.

  • Automation and Alerts

    • A typo has been fixed in message ActionWithIdNotFound.

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Dashboards and Widgets

    • A newly added, unconfigured dashboard parameter could not be deleted again. This issue has been fixed.

  • Queries

    • Updates to the query partition table now only change partitions with dead nodes. This should allow queries to continue without requiring a resubmit when a previously unknown node joins the cluster.

    • Hosts listed in the query partition table are now kept up to date as those hosts restart. This should prevent an issue where removing too many nodes from a cluster could prevent queries from running.

    • Nodes configured not to run queries are now prevented from starting queries locally when the query request can't be proxied.

  • Other

    • Fixed an issue where ingest-only nodes would fail all requests to /dataspaces and /repositories.

Falcon LogScale 1.77.0 GA (2023-02-14)

Version: 1.77.0
Type: GA
Release Date: 2023-02-14
Availability: Cloud
End of Support: 2024-04-30
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, the restriction applies to the size after decompression.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Ingestion

    • It is no longer possible to list ingest tokens for system repositories.

New features and improvements

  • UI Changes

    • Filtering and group-by icons have been added to the Fields Panel and Inspection Panel detail views.

  • Dashboards and Widgets

    • Hold ⇧ (Shift) to show unformatted values. Hold ⌥ (Alt on Windows or Option on Mac) to show full legend labels.

    • startTime, endTime, and parameter variables are now also available when working with Template Language expressions on the Search page.

  • Other

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

    • Adding more Repositories & Views to a group is now done inside a dialog.

  • Packages

    • Repository interactions are now supported in Packages. When exporting a package with dashboard link interactions referencing a dashboard also included in the package, then that reference will be updated to reflect this in the resulting zip file.

Fixed in this release

  • Storage

    • Fixed mini-segment fetches, which failed to complete properly during queries if the number of mini-segments involved was too large.

    • Job-to-node assignment in LogScale has been reworked. Jobs that only needed to run on a subset of nodes in the cluster — such as the job for firing alert notifications or the job enforcing retention settings — would previously select which hosts were responsible for executing the job based on the segment storage table.

      The selection is now based on consistent hashing, which means the job assignments should automatically follow the set of live nodes.

      It is possible to observe where a given job is running based on logs found with the query class=*JobAssignments*.
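The consistent-hashing selection described above can be sketched with rendezvous (highest-random-weight) hashing, one common consistent-hashing scheme. This illustrates the general technique only, not LogScale's actual implementation: each job deterministically picks an owner among live nodes, and the assignment only moves when the owner leaves the cluster.

```python
import hashlib

def job_owner(job_name: str, live_nodes: list[str]) -> str:
    """Pick the node responsible for a job via rendezvous hashing.
    Every node computes the same answer from the live-node set, so
    job assignments automatically follow the set of live nodes."""
    def score(node: str) -> int:
        # Hash the (job, node) pair; the highest score wins.
        digest = hashlib.sha256(f"{job_name}:{node}".encode()).hexdigest()
        return int(digest, 16)
    return max(live_nodes, key=score)
```

A useful property of this scheme is that removing a node that does not own the job never moves the job to a different node.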

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

  • Dashboards and Widgets

    • When importing a dashboard from a template, some widget options (including LegendPosition) were being ignored and reverted to their default value.

    • The Table widget is able to display any search result, yet in the widget dropdown it would often say "Incompatible". It now indicates compatibility. For event-type results, the Event List visualization will still be preferred and auto-selected.

    • When using the Export as template functionality, the label field was missing in the exported YAML.

      For more information, see Dashboards & Widgets.

    • If you clone a widget and click Edit in Search View, you would be asked to discard your changes before editing, causing confusion. Now, Edit in Search View is not available until you save or discard using the buttons in the top bar.

      For more information, see Manage Widgets, Manage Widgets.

    • The Scatter Chart widget visualization would under some conditions claim to be compatible with any result that has 3 or more fields. Yet it would not display anything unless the actual data was numeric. The Scatter Chart visualization now properly detects compatibility and ignores any non-numeric fields in the query result.

  • Functions

    • The collect() function has been fixed so that its limit parameter is obeyed. Previously, results could be inconsistent when there were more values to collect than specified by the limit.

  • Other

    • Fixed mini-segment downloads during queries, where download retries could fail spuriously even if the download actually succeeded.

    • Linked to the correct SaaS EULA for SaaS customers.

    • Fixed an issue where a timeout when publishing to the global topic in Kafka temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.5 LTS (2023-07-04)

Version: 1.76.5
Type: LTS
Release Date: 2023-07-04
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.5/server-1.76.5.tar.gz

These notes include entries from the following previous releases: 1.76.1, 1.76.2, 1.76.3, 1.76.4

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, the restriction applies to the size after decompression.

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve the CVE-2022-36944 issue, although Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in Query Editor will show for certain function parameters like time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The new GraphQL API mutations' signatures are almost the same as the create mutation for the corresponding action, except that the test mutations require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.
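A request body for one of the mutations listed above can be sketched as follows. The mutation name testEmailAction comes from the list; the variable and field names (recipients, triggerName, eventData) are illustrative assumptions, since the real signature mirrors the corresponding create mutation plus the required event data and trigger name.

```python
import json

def build_test_email_action(recipients, trigger_name, events):
    """Build a GraphQL request body for the testEmailAction mutation
    listed above. Argument names here are assumptions for illustration;
    consult the GraphQL schema for the actual signature."""
    mutation = (
        "mutation Test($recipients: [String!]!, $triggerName: String!,"
        " $eventData: String!) {"
        " testEmailAction(recipients: $recipients,"
        " triggerName: $triggerName, eventData: $eventData) }"
    )
    return json.dumps({
        "query": mutation,
        "variables": {
            "recipients": recipients,
            "triggerName": trigger_name,
            "eventData": json.dumps(events),  # sample event data to test with
        },
    })
```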

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is true, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, this extra work is skipped, shortening shutdown, since very recent segments can be resumed on the next boot, assuming that boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboards Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.
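The tz URL parameter described above can be read with standard URL parsing. A small client-side sketch (illustrative only, not LogScale code):

```python
from urllib.parse import urlparse, parse_qs

def dashboard_timezone(url: str, default: str = "UTC") -> str:
    """Read the temporary timezone from a dashboard URL on page load,
    e.g. ...?tz=Europe/Copenhagen. The "UTC" fallback is an assumption
    for illustration."""
    query = parse_qs(urlparse(url).query)
    return query.get("tz", [default])[0]
```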

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value entirely; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when it needs to, including those set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

Fixed in this release

  • Security

    • Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and has taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to a non-default value of akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass it a different value.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight savings time.

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • We have fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to add Automation to the IP allowlist.

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Storage

    • Fixed mini-segment fetches, which failed to complete properly during queries if the number of mini-segments involved was too large.

    • The noise from MiniSegmentMergeLatencyLoggerJob has been reduced by being more conservative about when mini segments that are unexpectedly not being merged are logged; the job now takes datasource idleness into account.

  • API

    • Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed compression type extreme for configuration COMPRESSION_TYPE. Specifying extreme will now select the default value of high so as not to cause configuration errors for clusters that specify extreme. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.

  • Ingestion

    • We have set a maximum number of events to parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow, but because you were processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

  • Queries

    • Fixed query scheduling, which could race with the background recompression of files in a way that caused the query to miss the file and add warnings about segment files being missed.

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries, where download retries could fail spuriously even if the download actually succeeded.

    • Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range to not be included in the result. The visible symptom was that narrowing the search span provided more hits.

    • Fixed an issue where a timeout when publishing to the global topic in Kafka temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.4 LTS (2023-06-22)

Version: 1.76.4
Type: LTS
Release Date: 2023-06-22
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.4/server-1.76.4.tar.gz

These notes include entries from the following previous releases: 1.76.1, 1.76.2, 1.76.3

Security fixes.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, the restriction applies to the size after decompression.

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve the CVE-2022-36944 issue, although Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in Query Editor will show for certain function parameters like time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The new GraphQL API mutations' signatures are almost the same as the create mutation for the corresponding action, except that the test mutations require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.
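
      A minimal sketch of how such a glob allow list might be evaluated, assuming shell-style globs in a comma-separated list (the exact pattern semantics LogScale applies are not specified here):

```python
from fnmatch import fnmatch

# Illustrative only: assumes a comma-separated list of shell-style globs.
def recipient_allowed(recipient: str, allow_list: str) -> bool:
    patterns = [p.strip() for p in allow_list.split(",") if p.strip()]
    return any(fnmatch(recipient.lower(), p.lower()) for p in patterns)

# Recipients not matching any pattern would be blocked cluster-wide.
assert recipient_allowed("ops@example.com", "*@example.com, *@corp.example")
assert not recipient_allowed("someone@elsewhere.test", "*@example.com")
```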

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are closed and uploaded to the bucket during shutdown, and a global snapshot is also written (and uploaded). When not set, the extra work of flushing very recent segments is skipped, since those segments can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboard Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.
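
      Reading such a parameter can be sketched with the standard library; the tz parameter name comes from the note above, while the URL itself is a placeholder:

```python
from urllib.parse import urlsplit, parse_qs
from zoneinfo import ZoneInfo

# Sketch: extract a temporary dashboard timezone from the URL query string.
def timezone_from_url(url):
    params = parse_qs(urlsplit(url).query)
    return ZoneInfo(params["tz"][0]) if "tz" in params else None

tz = timezone_from_url("https://logscale.example/dashboards/overview?tz=Europe/Copenhagen")
assert str(tz) == "Europe/Copenhagen"
```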

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when needed, including for datasources that were set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
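
      The pruning rule can be sketched as a simple timestamp comparison; the 2-hour default comes from the note above, while the data structure is illustrative:

```python
from datetime import datetime, timedelta, timezone

OFFLINE_LIMIT = timedelta(hours=2)  # default removal threshold from the note above

# Illustrative sketch: pick ephemeral nodes whose last heartbeat is too old.
def nodes_to_remove(last_seen, now, limit=OFFLINE_LIMIT):
    return sorted(node for node, ts in last_seen.items() if now - ts > limit)

now = datetime(2024, 12, 17, 12, 0, tzinfo=timezone.utc)
last_seen = {
    "node-a": now - timedelta(minutes=30),  # still within the limit
    "node-b": now - timedelta(hours=3),     # offline too long, removed
}
assert nodes_to_remove(last_seen, now) == ["node-b"]
```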

Fixed in this release

  • Security

    • Verified that LogScale does not use the affected Akka dependency component in CVE-2023-31442 by default, and have taken additional precautions to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameter to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to default value (inet-address) will mitigate the potential risk.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass a different value.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • We have fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to add Automation to the IP allowlist.

  • GraphQL API

    • Fixed pending deletes that could cause nodes to fail to start with a NullPointerException.

  • Storage

    • Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.

    • Reduced noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when to log mini segments that are unexpectedly not being merged; the job now also takes datasource idleness into account.

  • API

    • Fixed an issue where API Explorer could fail to load in some configurations when using cookie authentication.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, so that clusters specifying extreme do not encounter configuration errors. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.

  • Ingestion

    • We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow but because it was processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

  • Queries

    • Fixed a race between query scheduling and background recompression of files, which could result in a query missing a file and adding warnings about segment files being missed by the query.

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries, as they could cause download retries to fail spuriously even when the download had actually succeeded.

    • Fixed a timeout when publishing to the global topic in Kafka, which temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.3 LTS (2023-04-27)

Version: 1.76.3
Type: LTS
Release Date: 2023-04-27
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.3/server-1.76.3.tar.gz

These notes include entries from the following previous releases: 1.76.1, 1.76.2

Bug fix.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of an ingest request and rejects oversized requests.

      If the request is compressed within HTTP, then this restricts the size after decompressing.
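
      The effect of the limit on compressed requests can be sketched as follows; 32 MB is the announced new default, and streaming decompression lets an oversized request be rejected before it is fully expanded:

```python
import gzip
import io

MAX_INGEST_REQUEST_SIZE = 32 * 1024 * 1024  # announced new default (32 MB)

# Illustrative sketch: enforce the limit on the size after decompression.
def within_ingest_limit(gzipped_body, limit=MAX_INGEST_REQUEST_SIZE):
    total = 0
    with gzip.GzipFile(fileobj=io.BytesIO(gzipped_body)) as f:
        while chunk := f.read(64 * 1024):
            total += len(chunk)
            if total > limit:
                return False  # reject without decompressing the rest
    return True

body = gzip.compress(b'{"events": []}' * 1024)  # ~14 KB decompressed
assert within_ingest_limit(body)
assert not within_ingest_limit(body, limit=1024)
```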

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve the CVE-2022-36944 issue, though Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, please always refer to the Kafka upgrade guide.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in Query Editor will show for certain function parameters like time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first and is the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      Each new mutation's signature is almost the same as that of the create mutation for the same action, except that the test mutations additionally require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are closed and uploaded to the bucket during shutdown, and a global snapshot is also written (and uploaded). When not set, the extra work of flushing very recent segments is skipped, since those segments can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboard Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when needed, including for datasources that were set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

Fixed in this release

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • We have fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to add Automation to the IP allowlist.

  • GraphQL API

    • Fixed pending deletes that could cause nodes to fail to start with a NullPointerException.

  • Storage

    • Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.

    • Reduced noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when to log mini segments that are unexpectedly not being merged; the job now also takes datasource idleness into account.

  • API

    • Fixed an issue where API Explorer could fail to load in some configurations when using cookie authentication.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, so that clusters specifying extreme do not encounter configuration errors. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.

  • Ingestion

    • We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow but because it was processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

  • Queries

    • Fixed a race between query scheduling and background recompression of files, which could result in a query missing a file and adding warnings about segment files being missed by the query.

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries, as they could cause download retries to fail spuriously even when the download had actually succeeded.

    • Fixed a timeout when publishing to the global topic in Kafka, which temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.2 LTS (2023-03-06)

Version: 1.76.2
Type: LTS
Release Date: 2023-03-06
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.2/server-1.76.2.tar.gz

These notes include entries from the following previous releases: 1.76.1

Security fix.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of an ingest request and rejects oversized requests.

      If the request is compressed within HTTP, then this restricts the size after decompressing.

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve the CVE-2022-36944 issue, though Kafka should not be affected by it. If you wish to do a rolling upgrade of your Kafka containers, please always refer to the Kafka upgrade guide.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in Query Editor will show for certain function parameters like time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first and is the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      Each new mutation's signature is almost the same as that of the create mutation for the same action, except that the test mutations additionally require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are closed and uploaded to the bucket during shutdown, and a global snapshot is also written (and uploaded). When not set, the extra work of flushing very recent segments is skipped, since those segments can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboard Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when needed, including for datasources that were set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

Fixed in this release

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • We have fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted: Automation for sending emails from Actions no longer uses the IP filter, so administrators no longer need to add Automation to the IP allowlist.

  • GraphQL API

    • Fixed pending deletes that could cause nodes to fail to start with a NullPointerException.

  • Storage

    • Fixed mini-segment fetches that failed to complete properly during queries when the number of mini-segments involved was too large.

    • Reduced noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when to log mini segments that are unexpectedly not being merged; the job now also takes datasource idleness into account.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, so that clusters specifying extreme do not encounter configuration errors. We suggest removing COMPRESSION_TYPE from your configuration unless you specify the only other non-default value, fast.

  • Ingestion

    • We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow but because it was processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

  • Queries

    • Fixed a race between query scheduling and background recompression of files, which could result in a query missing a file and adding warnings about segment files being missed by the query.

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries, as they could cause download retries to fail spuriously even when the download had actually succeeded.

    • Fixed a timeout when publishing to the global topic in Kafka, which temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.1 LTS (2023-02-27)

Version: 1.76.1
Type: LTS
Release Date: 2023-02-27
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.1/server-1.76.1.tar.gz

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of an ingest request and rejects oversized requests.

      If the request is compressed within HTTP, then this restricts the size after decompressing.

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in Query Editor will show for certain function parameters like time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first and is the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The new GraphQL API mutations' signatures are almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.
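As a rough sketch, one of these test mutations could be invoked over the GraphQL HTTP API much like its create counterpart. The input field names used below (triggerName, eventData) and the payload shape are assumptions for illustration, not the confirmed schema:

```python
import json

def build_test_email_action_payload(name, recipients, trigger_name, events):
    # Hypothetical sketch: the input field names (triggerName, eventData)
    # are assumed, not taken from the actual GraphQL schema.
    mutation = """
    mutation TestEmail($input: TestEmailActionInput!) {
      testEmailAction(input: $input)
    }
    """
    variables = {"input": {
        "name": name,
        "recipients": recipients,
        # Unlike the create mutation, a test mutation also needs a trigger
        # name and sample event data:
        "triggerName": trigger_name,
        "eventData": json.dumps(events),
    }}
    return {"query": mutation, "variables": variables}

payload = build_test_email_action_payload(
    "demo", ["ops@example.com"], "my-alert", [{"message": "hello"}])
```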

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.
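To illustrate the intended semantics, a glob-based allow list can be evaluated per recipient. This sketch uses Python's fnmatch and invented patterns; it is not the cluster's actual implementation:

```python
from fnmatch import fnmatchcase

def recipient_allowed(recipient: str, allow_globs: list[str]) -> bool:
    # Illustrative only: a recipient is allowed if it matches any glob in
    # the allow list; everything else is blocked cluster-wide.
    return any(fnmatchcase(recipient.lower(), g.lower()) for g in allow_globs)

allow_list = ["*@example.com", "oncall-*@corp.example.org"]
```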

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set to true, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are forced closed and uploaded to the bucket during shutdown, and a global snapshot is also written and uploaded. When not set, this extra work is skipped, shortening shutdown, since very recent segments can instead be resumed on the next boot, provided the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.
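Dynamic configurations of this kind are typically toggled via the GraphQL API; the setDynamicConfig mutation shape below is an assumption and should be verified against your cluster's schema:

```graphql
mutation {
  # Assumed mutation shape; verify against the GraphQL schema.
  setDynamicConfig(input: { config: FlushSegmentsAndGlobalOnShutdown, value: "true" })
}
```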

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboards Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, read from the URL on page load, e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.
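As an illustration of the URL-driven behavior, the tz parameter can be parsed and applied to a timestamp. The URL shape below is invented for the example:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs
from zoneinfo import ZoneInfo

# Hypothetical dashboard URL carrying a temporary timezone.
url = "https://logscale.example.com/dashboards/abc?tz=Europe/Copenhagen"
tz_name = parse_qs(urlparse(url).query)["tz"][0]

# Render a UTC timestamp in the requested timezone (CET in January).
ts = datetime(2023, 1, 31, 12, 0, tzinfo=timezone.utc)
local = ts.astimezone(ZoneInfo(tz_name))
```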

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning the shard count entirely; they only prevent it from decreasing the number of shards. The cluster may raise the number of shards on a datasource when needed, including datasources whose shard count was set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
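The removal rule can be pictured as a periodic sweep over last-seen timestamps; the data shapes and helper below are illustrative, not LogScale internals:

```python
OFFLINE_LIMIT_S = 2 * 60 * 60  # default offline limit: 2 hours

def nodes_to_remove(last_seen: dict[str, float], now: float) -> list[str]:
    # Illustrative sweep: remove ephemeral nodes whose last heartbeat is
    # older than the offline limit.
    return [node for node, seen in last_seen.items()
            if now - seen > OFFLINE_LIMIT_S]

now = 10_000_000.0
last_seen = {"vhost-1": now - 60, "vhost-2": now - 3 * 60 * 60}
stale = nodes_to_remove(last_seen, now)  # only vhost-2 exceeds the limit
```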

Fixed in this release

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • Fixed tooltips in the query editor being hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted: sending emails from Actions no longer uses the IP filter, so administrators do not need to put Automation on the IP allowlist.

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Storage

    • Fixed mini-segment fetches failing to complete properly during queries when the number of mini-segments involved was too large.

    • Reduced the noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when mini segments that are unexpectedly not being merged are logged; the job now takes datasource idleness into account.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed compression type extreme for configuration COMPRESSION_TYPE. Specifying extreme will now select the default value of high in order not to cause configuration errors for clusters that specify extreme. The suggestion is to remove COMPRESSION_TYPE from your configurations unless you specify the only other non-default value of fast.

  • Ingestion

    • We have set a maximum number of events that are parsed under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow, but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
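The scheme amounts to scaling the allowed time with batch size; all constants in this sketch are invented for illustration:

```python
import math

def batch_parse_timeout(num_events: int,
                        base_timeout_s: float = 5.0,
                        events_per_timeout: int = 100) -> float:
    # Illustrative: grant one base timeout per chunk of events, so a large
    # batch gets more total time without letting a single slow parser run
    # unbounded. All constants here are made up for the example.
    chunks = max(1, math.ceil(num_events / events_per_timeout))
    return chunks * base_timeout_s
```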

  • Queries

    • Fixed query scheduling hitting races with the background recompression of files, which could result in a query missing a file and adding warnings about segment files being missed by the query.

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries causing download retries to fail spuriously, even if the download actually succeeded.

    • Fixed timeouts when publishing to the global topic in Kafka, which temporarily marked input segments for merge as broken.

Falcon LogScale 1.76.0 GA (2023-02-07)

Version: 1.76.0 | Type: GA | Release Date: 2023-02-07 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of an ingest request and rejects oversized requests.

      If the request is compressed within HTTP, the limit applies to the size after decompression.

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

New features and improvements

Fixed in this release

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

Falcon LogScale 1.75.0 GA (2023-01-31)

Version: 1.75.0 | Type: GA | Release Date: 2023-01-31 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

New features and improvements

  • UI Changes

    • The Query Editor now shows suggestions for certain function parameters, such as time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • The Search page now supports picking a timezone, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The new GraphQL API mutations' signatures are almost the same as the create mutation for the same action, except that test actions require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.

  • Functions

    • default() now supports assigning the same value to multiple fields, by passing multiple field names to the field parameter.

    • selectLast() and groupBy() now use less state size, allowing for larger result sets.

    • The performance of in() is improved when matching with values that do not use the * wildcard.

Fixed in this release

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

  • Automation and Alerts

    • For self-hosted: sending emails from Actions no longer uses the IP filter, so administrators do not need to put Automation on the IP allowlist.

  • Queries

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader, causing queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

Falcon LogScale 1.74.0 GA (2023-01-24)

Version: 1.74.0 | Type: GA | Release Date: 2023-01-24 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Security

    • When creating a new group, you now add the group and assign permissions for it in the same multi-step dialog.

  • UI Changes

    • Changes have been made for the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

Fixed in this release

  • UI Changes

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed tooltips in the query editor being hidden by other elements in the UI.

  • Ingestion

    • We have set a maximum number of events that are parsed under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow, but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

Falcon LogScale 1.73.0 GA (2023-01-17)

Version: 1.73.0 | Type: GA | Release Date: 2023-01-17 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Functions

    • The query function holtwinters() has been removed from the product.

    • Using ioc:lookup() in a query while the IOC service is disabled will now result in a failed query instead of a warning stating that there are partial results.

Fixed in this release

  • Storage

    • Reduced the noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when mini segments that are unexpectedly not being merged are logged; the job now takes datasource idleness into account.

Falcon LogScale 1.72.0 GA (2023-01-10)

Version: 1.72.0 | Type: GA | Release Date: 2023-01-10 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The query function holtwinters() is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.

Fixed in this release

  • Configuration

    • Removed compression type extreme for configuration COMPRESSION_TYPE. Specifying extreme will now select the default value of high in order not to cause configuration errors for clusters that specify extreme. The suggestion is to remove COMPRESSION_TYPE from your configurations unless you specify the only other non-default value of fast.

  • Queries

    • Fixed query scheduling hitting races with the background recompression of files, which could result in a query missing a file and adding warnings about segment files being missed by the query.

Falcon LogScale 1.71.0 GA (2023-01-03)

Version: 1.71.0 | Type: GA | Release Date: 2023-01-03 | Availability: Cloud | End of Support: 2024-02-28 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The query function holtwinters() is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.

Upgrades

Changes that may occur or be required during an upgrade.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You will potentially experience that accessing the list of installed packages will fail, and creating new dashboards, alerts, parsers, etc. based on package templates will not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • UI Changes

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first; it is also the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set to true, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are forced closed and uploaded to the bucket during shutdown, and a global snapshot is also written and uploaded. When not set, this extra work is skipped, shortening shutdown, since very recent segments can instead be resumed on the next boot, provided the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

Fixed in this release

  • Configuration

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

Falcon LogScale 1.70.2 LTS (2023-03-06)

Version: 1.70.2 | Type: LTS | Release Date: 2023-03-06 | Availability: Cloud | End of Support: 2024-01-31 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.2/server-1.70.2.tar.gz

These notes include entries from the following previous releases: 1.70.0, 1.70.1

Security fix and bug fixes.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.

      The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as on Kubernetes. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over the vhost numbers. This is only necessary while migrating.

      The recommended steps for migrating off of ZooKeeper are as follows:

      1. Deploy the new LogScale version to all nodes.

      2. Remove ZOOKEEPER_URL_FOR_NODE_UUID, ZOOKEEPER_URL, ZOOKEEPER_PREFIX_FOR_NODE_UUID, ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID from the configuration for all nodes.

      3. Reboot

      Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.

      Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where USING_EPHEMERAL_DISKS is set to true will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator, but for clusters that do not use this operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we're providing a reminder here since it may be needed more frequently now.

      The cluster GraphQL query can provide updated tables (the suggestedIngestPartitions and suggestedStoragePartitions fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.

      Should you experience any issue in using this feature, you may opt out by setting NEW_VHOST_SELECTION_ENABLED=false. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.

      Note

      When using Operator and Kubernetes deployments, you must upgrade to version 0.17.0 of the operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
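The read-suggestions-then-apply flow described above can be sketched as below; the subfields selected on the suggestion types (id, hosts) are assumptions to be checked against the GraphQL schema:

```python
# Sketch of the partition-table update flow for clusters not using the
# Kubernetes operator: read suggested tables, then apply them.
SUGGESTIONS_QUERY = """
query SuggestedPartitions {
  cluster {
    suggestedIngestPartitions { id hosts }
    suggestedStoragePartitions { id hosts }
  }
}
"""

# Each suggestions field pairs with the mutation that applies it.
APPLY_MUTATION = {
    "suggestedIngestPartitions": "updateIngestPartitionScheme",
    "suggestedStoragePartitions": "updateStoragePartitionScheme",
}
```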

  • Other

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by this issue. If you wish to do a rolling upgrade of your Kafka containers, please refer to the Kafka upgrade guide.

New features and improvements

  • Dashboards and Widgets

    • Added support for export and import of dashboards with query-based widgets that use a fixed time window.

  • Other

    • Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause Result is partial responses to user queries.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

    • New background task TagGroupingSuggestionsJob reports on the flow rate in repositories with many datasources, flagging the datasources it considers slow based on configured segment sizes and flush intervals. The output in the log can inform a decision to add Tag Grouping to a repository to reduce the number of slow datasources.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Dashboards and Widgets

    • Fixed three bugs in the Bar Chart: sorting could be wrong when query results updated in the stacked version, flickering occurred when deselecting all series in the legend, and deselecting renamed series in the legend had no effect.

    • Scatter Chart has been updated:

      • The x-axis would not update correctly with updated query results

      • The trend line toggle in the style panel was invisible.

    • Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.

  • Other

    • Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.

    • Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

    • Fixed timeouts when publishing to the global topic in Kafka, which temporarily marked input segments for merge as broken.

Known Issues

Falcon LogScale 1.70.1 LTS (2023-02-01)

Version: 1.70.1 | Type: LTS | Release Date: 2023-02-01 | Availability: Cloud | End of Support: 2024-01-31 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.1/server-1.70.1.tar.gz

These notes include entries from the following previous releases: 1.70.0

Bug fixes and updates.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.

      The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as on Kubernetes. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over the vhost numbers. This is only necessary while migrating.

      The recommended steps for migrating off of ZooKeeper are as follows:

      1. Deploy the new LogScale version to all nodes.

      2. Remove ZOOKEEPER_URL_FOR_NODE_UUID, ZOOKEEPER_URL, ZOOKEEPER_PREFIX_FOR_NODE_UUID, ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID from the configuration for all nodes.

      3. Reboot

      Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.

      Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where USING_EPHEMERAL_DISKS is set to true will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator, but for clusters that do not use this operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we're providing a reminder here since it may be needed more frequently now.

      The cluster GraphQL query can provide updated tables (the suggestedIngestPartitions and suggestedStoragePartitions fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.

      Should you experience any issue in using this feature, you may opt out by setting NEW_VHOST_SELECTION_ENABLED=false. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.

      Note

      When using Operator and Kubernetes deployments, you must upgrade to version 0.17.0 of the operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.

New features and improvements

  • Dashboards and Widgets

    • Added support for export and import of dashboards with query-based widgets that use a fixed time window.

  • Other

    • Added code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause Result is partial responses to user queries.

    • New background task TagGroupingSuggestionsJob reports on the flow rate in repositories with many datasources, flagging the datasources it considers slow based on configured segment sizes and flush intervals. The output in the log can inform a decision to add Tag Grouping to a repository to reduce the number of slow datasources.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • Dashboards and Widgets

    • Fixed three bugs in the Bar Chart: sorting could be wrong when query results updated in the stacked version, flickering occurred when deselecting all series in the legend, and deselecting renamed series in the legend had no effect.

    • Scatter Chart has been updated:

      • The x-axis would not update correctly with updated query results

      • The trend line toggle in the style panel was invisible.

    • Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.

  • Other

    • Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.

    • Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

Known Issues

Falcon LogScale 1.70.0 LTS (2023-01-16)

Version: 1.70.0 | Type: LTS | Release Date: 2023-01-16 | Availability: Cloud | End of Support: 2024-01-31 | Security Updates: No | Upgrades From: 1.44.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.0/server-1.70.0.tar.gz

Bug fixes and updates.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.

      The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as on Kubernetes. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over the vhost numbers. This is only necessary while migrating.

      The recommended steps for migrating off of ZooKeeper are as follows:

      1. Deploy the new LogScale version to all nodes.

      2. Remove ZOOKEEPER_URL_FOR_NODE_UUID, ZOOKEEPER_URL, ZOOKEEPER_PREFIX_FOR_NODE_UUID, ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID from the configuration for all nodes.

      3. Reboot

      Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.

      Since vhost numbers now change when a disk is wiped, cluster administrators for clusters using nodes where USING_EPHEMERAL_DISKS is set to true will need to ensure that the storage and digest partitioning tables are up to date as hosts join and leave the cluster. Updating the tables is handled automatically if using the LogScale Kubernetes operator, but for clusters that do not use this operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we're providing a reminder here since it may be needed more frequently now.

      The cluster GraphQL query can provide updated tables (the suggestedIngestPartitions and suggestedStoragePartitions fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations.

      Should you experience any issue in using this feature, you may opt out by setting NEW_VHOST_SELECTION_ENABLED=false. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.

      Note

      When using Operator and Kubernetes deployments, you must upgrade to version 0.17.0 of the operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.

New features and improvements

  • Dashboards and Widgets

    • Added support for export and import of dashboards with query-based widgets that use a fixed time window.

  • Other

    • Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause Result is partial responses to user queries.

    • New background task TagGroupingSuggestionsJob reports on flow rates in repositories with many datasources, flagging those it considers slow based on configured segment sizes and flush intervals. Its log output can inform the decision to add Tag Grouping to a repository to reduce the number of slow datasources.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • Dashboards and Widgets

    • Fixed three bugs in the Bar Chart — where the sorting would be wrong with updating query results in the stacked version, flickering would occur when deselecting all series in the legend, and deselecting renamed series in the legend would not have any effect.

    • Scatter Chart has been updated to fix the following issues:

      • The x-axis would not update correctly with updated query results.

      • The trend line toggle in the style panel was invisible.

    • Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.

  • Other

    • Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.

Known Issues

Falcon LogScale 1.69.0 GA (2022-12-13)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.69.0GA2022-12-13

Cloud

2024-01-31No1.44.0No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The query function holtwinters() is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.

New features and improvements

  • Storage

    • Reduced CPU usage of background tasks for the case of high partition count and high datasource count.

  • Queries

    • Add support for GET and DELETE requests for queries by external Query ID without including the repository name in the URL. The new URL is /api/v1/queryjobs/QUERYID. Note that shared dashboard token authentication is not supported on this API. (The existing API on /api/v1/repositories/REPONAME/queryjobs/QUERYID remains unmodified and supports POST requests for submitting queries.)
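The two URL forms can be sketched as a small path builder (a sketch only; the paths are from this note, and the choice of GET for polling vs DELETE for cancelling is an assumption about typical query-job usage):

```python
def queryjob_url(query_id, repo=None):
    """Return the query-jobs API path for a query by external Query ID.

    Without a repository, the new repository-less path is used (GET/DELETE).
    With a repository, the existing per-repository path is used, which also
    accepts POST for submitting queries.
    """
    if repo is None:
        return "/api/v1/queryjobs/%s" % query_id
    return "/api/v1/repositories/%s/queryjobs/%s" % (repo, query_id)
```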

  • Other

    • Add bounds to maximum number of active notifications per user.

    • Added option to filter by group and role permission types in groupsPage and rolesPage queries.

    • Throttle publishing to the global-events topic internally, based on the time spent in recent transactions of the same type (digest-related writes are not throttled). See the new configuration variable GLOBAL_THROTTLE_PERCENTAGE for details. Also see the metric global-operation-time for measurements of the time spent.

Fixed in this release

  • Other

    • Allow creating Kafka topics even if a broker is down.

    • LogScale no longer considers every host to be alive for a period after rebooting. Only hosts marked as running in global will be considered alive. This fixes an issue where a query coordinator might pointlessly direct queries to dead nodes because the coordinator had recently booted.

Falcon LogScale 1.68.0 GA (2022-12-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.68.0GA2022-12-06

Cloud

2024-01-31No1.44.0No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The query function holtwinters() is now deprecated and will be removed along with the release of future version 1.73; therefore, its usage in alerts is not recommended.

New features and improvements

  • Falcon Data Replicator

    • Enforcing S3 file size limits (30MB) in FDR feeds. Files will not be ingested if they are above the limit.
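A pre-flight check for the limit might look like the following sketch. The 30MB figure is from this note; interpreting it as 30 × 1024 × 1024 bytes is an assumption.

```python
# Assumption: "30MB" means 30 MiB; the note does not specify MB vs MiB.
FDR_FILE_SIZE_LIMIT = 30 * 1024 * 1024

def fdr_file_ingestible(size_bytes):
    """Files above the limit are skipped rather than ingested."""
    return size_bytes <= FDR_FILE_SIZE_LIMIT
```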

  • UI Changes

    • Introduced the Social Login Settings feature: all customers with access to the organization identity providers page can now change social login settings in the UI. See Authentication & Identity Providers for details.

    • No longer possible to add a color to roles. Existing role colors removed from the UI.

  • Automation and Alerts

    • Improved performance when storing alert errors and trigger times in global.

  • Configuration

    • Set dynamic configuration BucketStorageWriteVersion to 3. This sets the format for files written to bucket storage to use a format that allows files larger than 2GB and incurs less memory pressure when decrypting files during download from the bucket. The new format is supported only from version 1.44.0 onwards.

    • Set minimum version for the cluster to be 1.44.0.

    • Changed the default value for AUTHENTICATION_METHOD from none to single-user.

      To set username and password, use the environment variables:

      See SINGLE_USER_USERNAME and SINGLE_USER_PASSWORD documentation for more details on these variables.

  • Other

    • Audit logging has been improved.

    • New metric global-operation-time: tracks local time spent processing each kind of message received on the global events topic.

Fixed in this release

  • UI Changes

    • The warning about unsaved changes being lost on an edited dashboard will now only show when actual changes have been made.

  • Other

    • Fixed a bug in decryption code used when decrypting downloaded files from bucket storage when version-for-bucket-writes=3. The bug prevented decryption of files larger than 2GB.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

Falcon LogScale 1.67.0 GA (2022-11-29)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.67.0GA2022-11-29

Cloud

2024-01-31No1.30.0No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Functions

    • A new query function named createEvents() has been released. This function creates events from strings and is used for testing queries.

Fixed in this release

  • UI Changes

    • URL paths with repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • IP Location drilldowns now correctly use lat and lon field names instead of latitude and longitude.

  • Functions

    • Bugs fixed for the collect() function, where:

      • The function mistakenly warned about exceeding its limit if the number of collected values was equal to the limit.

      • The limit parameter was not applied correctly.

Falcon LogScale 1.66.0 GA (2022-11-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.66.0GA2022-11-22

Cloud

2024-01-31No1.30.0Yes

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Configuration

  • Other

    • Make adjustments to the HostsCleanerJob. It will now remove references to missing hosts in fewer writes, and will stop immediately if a host rejoins the cluster.

Fixed in this release

  • Functions

    • Fixed a bug seen in version 1.65 where groupBy() on multiple fields would sometimes produce multiple rows for the same combination of keys.

  • Other

    • Fixed an issue that could cause repeated unnecessary updates of currentHosts for some segments.

    • Fixed a race where unavailable segments, due to nodes going away, would not become available again after nodes returning.

    • Fixed an issue that could cause the error message Object is missing required member replicationFactor when downgrading from current versions to older versions. The error message is only a nuisance, since the object failing deserialization isn't in use in released code yet.

  • Packages

    • Fixed an issue where deleting a parser through an update or uninstall of a package could fail in an unexpected way if the parser was used by an ingest listener or an FDR feed. Now, a proper error message will be shown.

Falcon LogScale 1.65.0 GA (2022-11-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.65.0GA2022-11-15

Cloud

2024-01-31No1.30.0No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • UI Changes

    • The Repository icon has been changed to match the new look and feel.

    • A new UI for event forwarders located under Organisation Settings now allows you to configure your event forwarders. See Event Forwarders for details.

  • Automation and Alerts

    • Added a query editor warning (yellow wavy lines) for joins in alerts.

  • Configuration

    • Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention by time setting. A reasonable range is 0 through 90.
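The percentage-of-retention calculation can be sketched as follows (a sketch only; the default of 20 and the 0-90 reasonable range are from the note, while clamping out-of-range values is an assumption):

```python
def max_merge_window_days(retention_days, undersized_merging_retention_percentage=20):
    """Widest time span (in days) of undersized segments that may be merged
    together, expressed as a percentage of the repository's time-based
    retention. Values outside the documented reasonable range are clamped
    here for illustration.
    """
    pct = min(max(undersized_merging_retention_percentage, 0), 90)
    return retention_days * pct / 100.0
```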

  • Other

    • Docker images have been upgraded to Java 17.0.5.

    • Added audit logging for S3 archiving, tracking when it is enabled, disabled, configured, and restarted.

    • Increased the limits for bucket(). The maximum number of series has been raised from 50 to 500 and the maximum number of output events has been raised from 10,000 to 100,000.

    • Avoid writing some messages to global if we can tell up-front that the message is unnecessary.

    • Reduce the scope of a precondition for a particular write to global. This should reduce unnecessary transaction rejections when such writes are bulked together.

Fixed in this release

  • Configuration

    • Fix some issues with the workings of the BUCKET_STORAGE_MULTIPLE_ENDPOINTS and S3_STORAGE_ENDPOINT_BASE configurations.

      The intent of this configuration is to allow users to configure buckets in multiple bucket services, for instance to allow migrating from AWS bucket storage to a local S3 service. When true, each bucket in global can have a separate endpoint configuration, as defined in S3_STORAGE_ENDPOINT_BASE and similar configurations. This allows an existing cluster running against AWS S3 to begin uploading segments to an on-prem S3 by switching the endpoint base, while still keeping access to existing segments in AWS.

      When false (default), the endpoint base configuration is applied to all existing buckets on boot. This is intended for cases where the base URL needs to be changed for all buckets, for instance due to the introduction of a proxy.

      The issue was that we were not consistently looking up endpoint URLs in global for the relevant bucket, but instead simply used whichever endpoint URL happened to be defined in configuration at the time. This has been fixed.

  • Other

    • The SAML login to Humio using deeplinks now works correctly.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This issue could block cleanup of those segments.

    • Fixed a minor desynchronization issue related to idle datasources.

    • Fixed a bug where interaction context menus did not update the query editor in Safari.

Falcon LogScale 1.64.0 GA (2022-11-01)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.64.0GA2022-11-01

Cloud

2024-01-31No1.30.0No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Configuration

    • Added new dynamic configuration for MaxIngestRequestSize to allow limiting size of ingest requests after content-encoding has been applied. The default can be set using the new configuration variable MAX_INGEST_REQUEST_SIZE, or applied via the dynamic configuration.
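The effect of the limit applying "after content-encoding has been applied" can be sketched as a check on the decoded size of a compressed request body. This is an illustrative sketch, assuming gzip content-encoding and that the limit is compared against the decompressed size.

```python
import gzip

def decoded_request_size(body, content_encoding=None):
    """Size of the request body after content-encoding has been applied,
    i.e. the decompressed size that MaxIngestRequestSize limits under this
    note's reading of the release text."""
    if content_encoding == "gzip":
        return len(gzip.decompress(body))
    return len(body)

def accept_ingest(body, content_encoding, max_ingest_request_size):
    return decoded_request_size(body, content_encoding) <= max_ingest_request_size
```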

  • Dashboards and Widgets

    • It is now possible to specify a dashboard by name in the URL. It is also possible to include the dashboard ID as a parameter in order to have permanent links.

  • Functions

    • Improved memory allocation for the query function split().

    • Removed the restriction that case and match expressions cannot be used in subqueries.

    • The query function split() now allows splitting arrays that contain arrays. For example, an event with a[0][0]=1, a[1][0]=2 can now be split using split(a), producing two events: _index=0, a[0]=1 and _index=1, a[0]=2.
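The example's behaviour can be modelled in a few lines: group the flattened keys by their first index, and emit one event per index with that index stripped. This is a toy model of only the behaviour shown in the example, not LogScale's split() implementation.

```python
import re
from collections import defaultdict

def split_array(event, field):
    """Toy model of split() on a nested array field."""
    pattern = re.compile(re.escape(field) + r"\[(\d+)\](.*)")
    groups = defaultdict(dict)
    rest = {}
    for key, value in event.items():
        m = pattern.fullmatch(key)
        if m:
            idx, tail = m.groups()
            # Strip the outer index; keep any remaining bracket suffix.
            groups[int(idx)][field + tail] = value
        else:
            rest[key] = value
    return [dict(rest, _index=i, **fields) for i, fields in sorted(groups.items())]
```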

    • The query function join() now provides information to optimize the query.

    • The performance of the query function in() has been improved when searching in tag fields.

  • Other

    • In the internal request log, include decoded size of the request body after content-encoding has been applied in new field decodedContentLength. This allows inspecting compression ratio of incoming requests and range of values seen. Requests without compression have contentLength in this new field too.
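Given the two fields, the compression ratio of a request falls out directly (a sketch assuming the fields are read from a parsed request-log entry; for uncompressed requests both fields carry the same value, giving a ratio of 1.0):

```python
def compression_ratio(entry):
    """Compression ratio of an incoming request, computed from the internal
    request log's contentLength and decodedContentLength fields."""
    return entry["decodedContentLength"] / entry["contentLength"]
```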

Fixed in this release

  • UI Changes

    • The Copy menu in the event Inspection Panel now copies text correctly again.

    • Fixed a bug where a disabled item in the main menu could be clicked, redirecting to the homepage.

  • Functions

  • Other

    • Fixed an issue that could cause merged segments to appear to be missing after a restart, due to the datasource going idle.

Falcon LogScale 1.63.6 LTS (2023-03-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.63.6LTS2023-03-22

Cloud

2023-11-30No1.30.0No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.6/server-1.63.6.tar.gz

These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3, 1.63.4, 1.63.5

Bug fix.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by this issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • Falcon Data Replicator

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • GraphQL API

  • Dashboards and Widgets

    • JSON in Log Line and JSON formats columns in Event List widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong amount of left padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bits.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause Result is partial responses to user queries.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention by time setting. A reasonable range is 0 through 90.

    • New background task that runs at startup. It verifies the checksums of local segment files, traversing the most recently updated segment files on the local disk based on their timestamps when Humio starts. If a file has an invalid checksum, it will be renamed to crc-error.X, where X is the ID of the segment, and an error will be logged.
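The quarantine step can be sketched as follows. This is a toy model only: LogScale's segment file layout is not public, so the trailing 4-byte CRC32 assumed here is purely illustrative; the crc-error.X rename is the behaviour the note describes.

```python
import os
import zlib

def verify_and_quarantine(path, segment_id):
    """Toy model of the startup checksum job: assume the last 4 bytes hold a
    big-endian CRC32 of the preceding data. On mismatch, rename the file to
    crc-error.<segment id> and return the new path."""
    with open(path, "rb") as f:
        blob = f.read()
    data, stored = blob[:-4], int.from_bytes(blob[-4:], "big")
    if zlib.crc32(data) == stored:
        return path
    quarantined = os.path.join(os.path.dirname(path), "crc-error.%s" % segment_id)
    os.rename(path, quarantined)
    return quarantined
```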

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Added use of the HTTP Proxy Client Configuration, if configured, in a lot of places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories will now be removed from the local file system while starting the server.

    • Created new test function for event forwarders, which takes as input an event forwarder configuration and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.

    • Use latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • UI Changes

    • URL paths with repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • Change missing @timestamp field to give a warning instead of an error in functions tail(), head(), bucket(), and timeChart().

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • API

    • Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.

  • Configuration

  • Dashboards and Widgets

    • Fixed a bug where query result containing no valid results was handled incorrectly in visualisation.

    • Fixed a bug in the Scatter Chart widget tooltip so that only the description of the hovered point is shown, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed unlimited waits for nodes to get in sync, which could cause digest coordination to fail. The time a node is allowed to spend getting "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

    • Fixed an issue where nothing was displayed on the average ingest chart in case only one datapoint is present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Fixed a bug in decryption code used when decrypting downloaded files from bucket storage when version-for-bucket-writes=3. The bug prevented decryption of files larger than 2GB.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

Known Issues

Falcon LogScale 1.63.5 LTS (2023-03-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.63.5LTS2023-03-06

Cloud

2023-11-30No1.30.0No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.5/server-1.63.5.tar.gz

These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3, 1.63.4

Security fix.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by this issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • Falcon Data Replicator

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • GraphQL API

  • Dashboards and Widgets

    • JSON in Log Line and JSON formats columns in Event List widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong amount of left padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bits.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause Result is partial responses to user queries.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention by time setting. A reasonable range is 0 through 90.

    • New background task that runs at startup. It verifies the checksums of local segment files, traversing the most recently updated segment files on the local disk based on their timestamps when Humio starts. If a file has an invalid checksum, it will be renamed to crc-error.X, where X is the ID of the segment, and an error will be logged.

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Added use of the HTTP Proxy Client Configuration, if configured, in a lot of places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories are now removed from the local file system when the server starts.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.

    • Use the latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • UI Changes

    • URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • A missing @timestamp field now gives a warning instead of an error in the tail(), head(), bucket(), and timeChart() functions.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over one point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed unlimited waits for nodes to get in sync, which caused digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

    • Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint was present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2 GB.
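
A 2 GB boundary like this is characteristic of a size value overflowing a signed 32-bit integer; the sketch below only illustrates that failure mode and is not LogScale's decryption code:

```python
def to_int32(n: int) -> int:
    """Wrap an integer to the signed 32-bit range, as a Java `int` would."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

two_gib = 2 * 1024 ** 3
print(to_int32(two_gib - 1))  # 2147483647: largest size that still fits
print(to_int32(two_gib))      # -2147483648: a 2 GiB length wraps negative
```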

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

Known Issues

Falcon LogScale 1.63.4 LTS (2023-02-01)

Version: 1.63.4
Type: LTS
Release Date: 2023-02-01
Availability: Cloud
End of Support: 2023-11-30
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.4/server-1.63.4.tar.gz

These notes include entries from the following previous releases: 1.63.1, 1.63.2, 1.63.3

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • Dashboards and Widgets

    • JSON in Log Line and JSON formats columns in Event List widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit integers.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause Result is partial responses to user queries.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration, UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.

    • New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk based on their timestamps when Humio starts. If a file has an invalid checksum, it is renamed to crc-error.X, where X is the ID of the segment, and an error is logged.

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Added use of the HTTP Proxy Client Configuration, if configured, in many places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories are now removed from the local file system when the server starts.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.

    • Use the latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • UI Changes

    • URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • A missing @timestamp field now gives a warning instead of an error in the tail(), head(), bucket(), and timeChart() functions.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over one point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed unlimited waits for nodes to get in sync, which caused digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited, in cases where the previous digest leader shut down gracefully.

    • Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint was present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2 GB.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

Known Issues

Falcon LogScale 1.63.3 LTS (2022-12-21)

Version: 1.63.3
Type: LTS
Release Date: 2022-12-21
Availability: Cloud
End of Support: 2023-11-30
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.3/server-1.63.3.tar.gz

These notes include entries from the following previous releases: 1.63.1, 1.63.2

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • Dashboards and Widgets

    • JSON in Log Line and JSON formats columns in Event List widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit integers.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • Add code to ensure all mini-segments for the same target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes can't happen. This could cause Result is partial responses to user queries.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration, UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.

    • New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk based on their timestamps when Humio starts. If a file has an invalid checksum, it is renamed to crc-error.X, where X is the ID of the segment, and an error is logged.

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Added use of the HTTP Proxy Client Configuration, if configured, in many places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories are now removed from the local file system when the server starts.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.

    • Use the latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • UI Changes

    • URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • A missing @timestamp field now gives a warning instead of an error in the tail(), head(), bucket(), and timeChart() functions.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over one point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed an issue where nothing was displayed on the average ingest chart when only one datapoint was present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2 GB.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

Known Issues

Falcon LogScale 1.63.2 LTS (2022-11-30)

Version: 1.63.2
Type: LTS
Release Date: 2022-11-30
Availability: Cloud
End of Support: 2023-11-30
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.2/server-1.63.2.tar.gz

These notes include entries from the following previous releases: 1.63.1

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • Dashboards and Widgets

    • JSON in Log Line and JSON formats columns in Event List widgets now have fields underlined on hover and are clickable. This allows drill-downs and copying values easily.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong number of left-padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bit integers.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration, UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention-by-time setting. A reasonable range is 0 through 90.

    • New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk based on their timestamps when Humio starts. If a file has an invalid checksum, it is renamed to crc-error.X, where X is the ID of the segment, and an error is logged.

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Added use of the HTTP Proxy Client Configuration, if configured, in many places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories are now removed from the local file system when the server starts.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now marked as deprecated.

    • Use the latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • UI Changes

    • URL paths with a repository name and no trailing /search resolved to Not Found. The URL /repoName will now again show the search page for the repoName repository.

    • A missing @timestamp field now gives a warning instead of an error in the tail(), head(), bucket(), and timeChart() functions.

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualisations.

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over one point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed an issue where nothing was displayed on the average ingest chart when only one data point was present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

Falcon LogScale 1.63.1 LTS (2022-11-14)

Version: 1.63.1 | Type: LTS | Release Date: 2022-11-14 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.63.1/server-1.63.1.tar.gz

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Interactions on JSON data are now enabled for JSON arrays in the Event List.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Automation and Alerts

    • Added two new message templates to actions, {query_start_s} and {query_end_s}. See Message Templates and Variables for details.
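
The _s suffix presumably denotes epoch seconds for the query window. A minimal illustration of the substitution (the timestamp values below are invented, and plain Python string formatting stands in for LogScale's template engine):

```python
# Hypothetical values: epoch-second timestamps for the alert's query window.
values = {"query_start_s": 1670000000, "query_end_s": 1670003600}

template = "Query window: {query_start_s} .. {query_end_s}"
message = template.format(**values)
```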

    • Self-hosted only: the old implementation of how alert queries are run has been removed. As a consequence, the dynamic configuration UseLegacyAlertJob has also been removed.

  • Dashboards and Widgets

    • JSON in Log Line and JSON format columns in Event List widgets now shows fields underlined on hover, and they are clickable, allowing easy drill-down and copying of values.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong amount of left padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bits.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.
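
format() is LogScale's own query function, but the corrected hex behavior can be illustrated with analogous conversions in Python, where a negative 64-bit integer renders as its two's-complement hex value rather than an unintelligible string. This is a sketch of the intended semantics, not the LogScale implementation; both helper names are ours:

```python
def to_hex64(n: int) -> str:
    # Render an integer as unsigned 64-bit two's-complement hex, so
    # negative values produce a readable fixed-width string.
    return format(n & 0xFFFFFFFFFFFFFFFF, "016x")

def pad_decimal(n: int, width: int) -> str:
    # Left-pad a decimal conversion with the correct number of zeros.
    return format(n, f"0{width}d")
```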

    • QueryAPI — executionModeHint renamed to executionMode.

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Added a new dynamic configuration UndersizedMergingRetentionPercentage, with a default value of 20. This configuration value is used when selecting undersized segments to merge; it controls how wide a time span can be merged together.

      The setting is interpreted as a percentage of the repository's retention by time setting. A reasonable range is 0 through 90.
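
Interpreted that way, the widest mergeable time span is a fraction of the repository's retention by time. A back-of-the-envelope sketch (the function name is ours, not a LogScale API; clamping to 0-90 reflects the documented "reasonable range", which may not be enforced):

```python
def max_merge_span_days(retention_days: float, percentage: float = 20.0) -> float:
    # UndersizedMergingRetentionPercentage is read as a percentage of the
    # repository's retention by time; clamp to the documented 0-90 range.
    pct = min(max(percentage, 0.0), 90.0)
    return retention_days * pct / 100.0
```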

    • New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk based on the timestamps they have when Humio starts. If a file has an invalid checksum, it is renamed to crc-error.X, where X is the ID of the segment, and an error is logged.
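
The per-file check amounts to recomputing a checksum and comparing it to the stored value. A minimal CRC-based sketch (the CRC-32 choice here merely mirrors the crc-error.X naming in the note; the actual checksum format is internal to LogScale):

```python
import zlib

def checksum_ok(payload: bytes, stored_crc: int) -> bool:
    # Recompute the CRC over the segment bytes and compare with the stored
    # value; on mismatch the note says the file is renamed and an error logged.
    return zlib.crc32(payload) == stored_crc

data = b"segment contents"
stored = zlib.crc32(data)
```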

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • The HTTP Proxy Client Configuration, if configured, is now used in many more places.

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

    • Empty datasource directories are now removed from the local file system during server startup.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now deprecated.

    • Use latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualizations.

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over a point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

    • Fixed a bug where the selfJoin() function would not apply the postfilter parameter.

  • Other

    • Fixed an issue where nothing was displayed on the average ingest chart when only one data point was present.

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • When a host is removed from global, a job tries to clean up any references to it from other places in global, such as segments. Fixed a bug in this job that meant it didn't clean up references on segments that were tombstoned but not yet gone from global. This could block cleanup of those segments.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

Falcon LogScale 1.63.0 GA (2022-10-25)

Version: 1.63.0 | Type: GA | Release Date: 2022-10-25 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • GraphQL API

    • Added enableEventForwarder and disableEventForwarder mutations to enable/disable event forwarders.
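
A sketch of how a client might build the request payload for these mutations. The mutation names come from the release note, but the "id" argument name and overall payload shape are illustrative assumptions, not confirmed API details:

```python
import json

def forwarder_mutation(enable: bool, forwarder_id: str) -> str:
    # Hypothetical payload builder: the "id" argument is an assumption;
    # consult the GraphQL schema for the real signature.
    name = "enableEventForwarder" if enable else "disableEventForwarder"
    return json.dumps({"query": f'mutation {{ {name}(id: "{forwarder_id}") }}'})
```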

  • Log Collector

    • The LogScale Collector download page has moved into the new top-level tab Falcon Log Collector Manage your Fleet (Cloud-only).

    • Humio Log Collector is now Falcon LogScale Collector.

    • New FleetOverview functionality for the LogScale Collector 1.2.0 is available.

  • Functions

    • The holtwinters() query function will be deprecated with the release of future version 1.68. From then, it cannot be expected to work in alerts, and it will be removed entirely with the release of version 1.72.

    • The base64Decode() query function now accepts non-canonical encodings.
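
"Non-canonical" here means encodings whose final character carries non-zero unused bits, which strict decoders reject. Python's default decoder is similarly lenient, which makes for a quick illustration of the idea (this mirrors the concept, not LogScale's implementation):

```python
import base64

# "Zg==" is the canonical encoding of b"f"; "Zh==" differs only in the unused
# trailing bits of the final character, so a lenient decoder yields the same byte.
canonical = base64.b64decode("Zg==")
non_canonical = base64.b64decode("Zh==")
```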

    • Improved the performance of the parseCsv() function, in particular in terms of memory pressure.

  • Other

    • Close all segments a node is working on when shutting down. This should help the node start from a later point in Kafka after reboots.

Fixed in this release

  • Other

    • Fixed an issue with validations when creating a new Ingest Listener as Netflow/UDP.

    • The form validation for Ingest Listener will now clearly tell the user that the parser needs to be selected when you change between different protocols.

    • Fixed a race condition where the segment top offset wasn't removed when a datasource went idle. This could result in event redaction not running for such segments.

Falcon LogScale 1.62.0 GA (2022-10-18)

Version: 1.62.0 | Type: GA | Release Date: 2022-10-18 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Updates.

New features and improvements

  • UI Changes

    • Change Humio logo to Falcon LogScale on login and signup pages.

    • Parsing JSON arrays in drill-down context menus no longer adds a trailing dot to the prefix field name.

    • Following its name change, mentions of Humio have been changed to Falcon LogScale.

    • Add Falcon LogScale announcement on login and signup pages.

  • Functions

    • Introduced new valid array syntax in array:contains() and array:regex() functions:

      • Changed the expected format of the array parameter.

      • Changed these functions to no longer be experimental.

  • Other

    • Add a script in the tarball distribution's bin directory to check the execution environment, checking common permission issues and other requirements for an environment suitable for running LogScale.

    • Added a new ingest endpoint for receiving metrics and traces via OpenTelemetry OTLP/http. See Ingesting with OpenTelemetry for all the details.

Humio Server 1.61.0 GA (2022-10-11)

Version: 1.61.0 | Type: GA | Release Date: 2022-10-11 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Interactions on JSON data are now enabled for JSON arrays in the Event List.

  • Functions

    • QueryAPI — Added staticMetaData property to QueryJobStartedResult. At the moment it only contains the property executionMode, which can be used to communicate hints about the way the backend executes the query to the front-end.

    • QueryAPI — executionModeHint renamed to executionMode.

Fixed in this release

  • Other

    • Fixed an issue where nothing was displayed on the average ingest chart when only one data point was present.

    • Fixed a regression causing a reflective method lookup to fail when Humio is running on a Java version prior to 13.

    • Fix an issue that could cause event redaction tasks to fail to complete, if a segment having events redacted was deleted due to retention.

Humio Server 1.60.0 GA (2022-10-04)

Version: 1.60.0 | Type: GA | Release Date: 2022-10-04 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Contextual drill-down menus for field interactions have been introduced, see Field Interactions. In particular:

      • Fields in the Inspection Panel are now provided with Drill down and Copy context menu items, replacing the former + - GroupBy buttons, see updates at Inspecting Events.

      • The Fields Panel on the left-hand side of the User Interface is now provided with Drill down and Copy context menu items, replacing the former drill-down buttons in the field details flyout (when clicking a field in the fields menu). See updates at Displaying Fields.

      • Fields that have JSON, URL and Timestamps content will have a Parse drill-down option which will parse the field as a LogScale field.

        Parsing JSON will automatically use the field name as prefix for the new field name.

      • Fields containing numbers (currently JSON only) will have Sum, Max, Min, Max values, and Percentiles drill-down options.

  • Dashboards and Widgets

    • JSON in Log Line and JSON format columns in Event List widgets now shows fields underlined on hover, and they are clickable, allowing easy drill-down and copying of values.

  • Other

    • New background task that runs at startup. It verifies the checksums present in local segment files, traversing the most recently updated segment files on the local disk based on the timestamps they have when Humio starts. If a file has an invalid checksum, it is renamed to crc-error.X, where X is the ID of the segment, and an error is logged.

    • Use latest version of Java 17 in Docker images.

    • It is now possible to expand multiple bell notifications.

Fixed in this release

  • Dashboards and Widgets

    • Fixed a bug where a query result containing no valid results was handled incorrectly in visualizations.

  • Functions

    • Fixed an issue where NaN values could cause groupBy() queries to fail.

Humio Server 1.59.0 GA (2022-09-27)

Version: 1.59.0 | Type: GA | Release Date: 2022-09-27 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Updates.

New features and improvements

  • UI Changes

    • The Single Value widget has updated properties:

      • New design for the toggle switch: it is now bigger and has a green/gray color profile instead of blue/gray.

      • The color profile of the displayed value by trend is now customizable.

Humio Server 1.58.0 GA (2022-09-20)

Version: 1.58.0 | Type: GA | Release Date: 2022-09-20 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Functions

    • Improved the format() function:

      • Fixed an issue where the format() function would output the wrong amount of left padded zeros for decimal conversions.

      • Formatting large positive numbers as hex no longer causes a loss of bits for integers less than 2^63.

      • Formatting negative numbers as hex no longer produces unintelligible strings.

      • Fixed an issue where adding the # flag would not display the correct formatting string.

      • Fixed an issue where specifying the time/date modifier N would fail to parse.

      • Fixed an issue where supplying multiple fields required you to specify the index of the last field as an argument specifier.

      • Added a length specifier to allow for outputting fields as 32-bit integers instead of 64-bits.

      • Using the type specifier %F now tries to format the specified field as a floating point.

      See the format() reference documentation page for all the above mentioned updates on the supported formatting syntax.

  • Other

    • Add an additional validation check when uploading files to S3-like bucket storage. Humio will now perform a HEAD request for the file's final location in the bucket to verify that the upload succeeded.

    • Empty datasource directories are now removed from the local file system during server startup.

Fixed in this release

  • Other

    • Fix an issue causing a content-length check for bucket uploads to fail when encryption was enabled. The content-length check is not normally enabled, so this should only affect clusters that have disabled ETag-based validation.

    • Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.

Humio Server 1.57.0 GA (2022-09-13)

Version: 1.57.0 | Type: GA | Release Date: 2022-09-13 | Availability: Cloud | End of Support: 2023-11-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Deprecated feature removal: the file-based backup feature was deprecated in 1.42.0 and is now removed from Humio. The following configs are no longer supported and will do nothing if set:

    The DELETE_BACKUP_AFTER_MILLIS config is still supported, as it is used for configuring the delay between a file being marked for deletion in Humio, and that file being removed from bucket storage.

New features and improvements

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Other

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • The HTTP Proxy Client Configuration, if configured, is now used in many more places.

    • Created a new test function for event forwarders, which takes an event forwarder configuration as input and tests whether it is possible to connect to the Kafka server. The existing test function, which takes an ID as input and tests an existing event forwarder by ID, is now deprecated.

Fixed in this release

  • Dashboards and Widgets

    • Fixed a bug in the Scatter Chart widget tooltip so that hovering the mouse over a point shows the description of that point only, instead of multiple points.

  • Functions

    • Fixed an issue where match() would sometimes give errors when ignoreCase=true and events contained latin1 encoded characters.

  • Other

    • It is now possible for a user to use the same personal invite token after the user has been transferred to another organization.

    • When selecting a parser test case, the selected test case is highlighted in the UI, so you can see what is selected.

    • Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.

Humio Server 1.56.4 LTS (2022-12-21)

Version: 1.56.4 | Type: LTS | Release Date: 2022-12-21 | Availability: Cloud | End of Support: 2023-09-30 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.4/server-1.56.4.tar.gz

These notes include entries from the following previous releases: 1.56.2, 1.56.3

Bug fixes and updates.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • Falcon Data Replicator

    • The feature flag for FDR feeds has been removed. FDR feeds are now generally available.

  • UI Changes

    • The event lists column header menus have been redesigned to be simpler:

      • You can now click the border between column headers in the event list to fit the column to its content.

      • The Event List column Format Panel has been updated to make it easier to manage columns.

      See Formatting Columns.

    • It is now possible to interact directly with the JSON properties and values in the EventList.

    • In the Event List you can assign data types to a column field. You can now make the setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the event list widget. See Field Data Types.

    • It is now possible to scroll to the selected event on the Search page.

    • Add UI for enabling and disabling social logins on the identity providers page.

    • The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Automation and Alerts

    • When creating a new Action, the entered name now stays when you change the Action Type, instead of being cleared. This also works when you change the name while creating a new Action.

    • When you create or edit an action it will now show a warning dialog if you have unsaved changes.

    • A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.

    • With the new implementation for running alerts, alerts will now start faster after a node has been restarted, making it easier for alerts with a small search interval to be able to alert on events during the downtime.

  • GraphQL API

    • Deprecates the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.

  • Configuration

    • New dynamic configuration MinimumHumioVersion, with a default value of 0.0.0, that allows setting a minimum Humio version that this cluster will accept starting on. This protects against inadvertently rolling back past the implied minimum version required by some other feature that has been turned on.
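
The guard amounts to a version comparison at startup. A sketch of the idea, for dotted numeric versions only (the function name is ours, not a LogScale API):

```python
def may_start(node_version: str, minimum: str = "0.0.0") -> bool:
    # Compare dotted numeric versions component-wise; with the default
    # minimum of 0.0.0, any version is accepted, matching the default config.
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(node_version) >= parse(minimum)
```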

    • On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be lazily created.

    • Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.

  • Dashboards and Widgets

    • Implemented support for widgets with a fixed time interval on dashboards.

  • Queries

    • When searching for queries using the Query Monitor in Cluster Administration you can now filter queries based on internal and external query IDs.

  • Functions

    • Improved warning message when using groupBy() with limit=max and the limit is exceeded.

    • Query functions selectFromMin() and selectFromMax() are now generally available for use.

    • BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.

      Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version you can expect the following:

      • Queries can be slower than usual initially as the query cache clears itself.

      • Queries may cause deserialization errors if they are run during upgrade and two or more nodes have different versions. It is recommended to block all queries upon upgrade and downgrade to and from this version and have all nodes upgrade at the same time.

  • Other

    • If a view is not found, we now try to fix up the cache on all cluster nodes.

    • It is now possible to select an entire permissions group when configuring permissions for a role.

      • Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.

      • Updated the flow of creating and editing roles in the Understanding Your Organization pages.

    • In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.

    • Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z] then strip that from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting examples such as &path and *path as field names to turn into the tag #path.
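
The rule described above can be sketched directly; this is a paraphrase of the documented behavior, not LogScale source:

```python
import re

def field_to_tag(field: str) -> str:
    # If the leading character does not match [a-zA-Z], strip it before
    # forming the tag name, so "&path" and "*path" both become "#path".
    if field and not re.match(r"[a-zA-Z]", field[0]):
        field = field[1:]
    return "#" + field
```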

    • Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required using the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.

    • Added support for writing H in place of minutes in the cron schedule of scheduled searches — see Cron Schedule Templates for details.
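
The H template is commonly implemented (for example in Jenkins cron syntax) as a deterministic hash of some per-entity name mapped into the minute range, spreading schedules across the hour without randomizing them per run. A hypothetical sketch of that idea; LogScale's actual hashing scheme is not documented here:

```python
import hashlib

def h_minute(name: str) -> int:
    # Deterministically map a scheduled search's identity to a minute 0-59,
    # so runs are spread across the hour but stable between evaluations.
    digest = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 60
```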

    • Added new system permission, PatchGlobal, enabling access to the global patch API.

    • Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that do "export" of an aggregate result via the Query API, as well as the "inner" queries in joins, and queries from scheduled searches.

    • When saving a parser, validate that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.

    • Added an option to make the token hashing output in JSON format. See tokenhashing usage described at Hashed Root Access Token.

    • When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable/disable whether the IdP is Default and Humio managed.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

    • Update Scala to address CVE-2022-36944.

  • Falcon Data Replicator

    • Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.

    • Removed the deprecated feature flag FdrFeeds.

  • UI Changes

    • Fixed a bug in the computation of query metadata that is used by the UI, which, for example, caused problems showing pie charts with queries containing both groupBy() and top().

  • GraphQL API

    • Fixed an error when querying for actions in GraphQL on a deleted view.

    • Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.

  • Dashboards and Widgets

    • Fixed an issue where word wrap did not work in the Inspect Panel.

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.

    • Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.

  • Functions

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

  • Other

    • Fixed an issue where a user's sessions weren't revoked when the user was deleted.

    • Fixed a bug in the decryption code used when decrypting downloaded files from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2 GB.

    • Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself wouldn't be shown since no regions were configured.

    • It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.

    • Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with topOffset attribute being -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments are not being merged.

    • We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).

    • Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.

    • Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.

    • Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.

    • Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

    • Fixed a regression introduced in 1.46.0 that could cause Humio to fail to properly replay data from Kafka when a node was restarted.

  • Packages

    • Previously, package parsing was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.

Humio Server 1.56.3 LTS (2022-10-05)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.56.3 | LTS | 2022-10-05 | Cloud | 2023-09-30 | No | 1.30.0 | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.3/server-1.56.3.tar.gz

These notes include entries from the following previous releases: 1.56.2

Bug fixes and updates.

New features and improvements

  • Falcon Data Replicator

    • The feature flag for FDR feeds has been removed. FDR feeds are now generally available.

  • UI Changes

    • The Event List column header menus have been redesigned to be simpler:

      • You can now click the border between column headers in the event list to fit the column to its content.

      • The Event List column Format Panel has been updated to make it easier to manage columns.

      See Formatting Columns.

    • It is now possible to interact directly with the JSON properties and values in the Event List.

    • In the Event List you can assign data types to a column field. You can now make that setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the Event List widget. See Field Data Types.

    • It is now possible to scroll to the selected event on the Search page.

    • Added UI for enabling and disabling social logins on the identity providers page.

    • The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.
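
      For example, a raw log line like the following hypothetical event payload will now render as fully expanded JSON, because it begins with a bracket followed by a newline:

      ```
      [
        {"status": 200, "path": "/api/v1/ingest"}
      ]
      ```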

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Automation and Alerts

    • When creating a new Action, the name is now retained when you change the Action Type, instead of being cleared. The name can also be changed while the Action is being created.

    • When you create or edit an action it will now show a warning dialog if you have unsaved changes.

    • A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.

    • With the new implementation for running alerts, alerts now start faster after a node has been restarted, making it easier for alerts with a small search interval to trigger on events from the downtime.

  • GraphQL API

    • Deprecates the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.

  • Configuration

    • New dynamic configuration MinimumHumioVersion (default value 0.0.0) that sets the minimum Humio version this cluster will accept starting on. This protects against inadvertently rolling back below the minimum version implied by a feature that has since been enabled.

    • On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be created lazily.

    • Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.

  • Dashboards and Widgets

    • Implemented support for widgets with a fixed time interval on dashboards.

  • Queries

    • When searching for queries using the Query Monitor in Cluster Administration you can now filter queries based on internal and external query IDs.

  • Functions

    • Improved warning message when using groupBy() with limit=max and the limit is exceeded.

    • Query functions selectFromMin() and selectFromMax() are now generally available for use.

    • BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.

      Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version you can expect the following:

      • Queries can be slower than usual initially as the query cache clears itself.

      • Queries may cause deserialization errors if they are run during the upgrade while two or more nodes have different versions. It is recommended to block all queries when upgrading to or downgrading from this version, and to upgrade all nodes at the same time.

  • Other

    • If a view is not found, the cache is now fixed up on all cluster nodes.

    • It is now possible to select an entire permissions group when configuring permissions for a role.

      • Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.

      • Updated the flow of creating and editing roles in the Understanding Your Organization pages.

    • In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.

    • Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z] then strip that from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting examples such as &path and *path as field names to turn into the tag #path.
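
      The rule above can be sketched as follows. This is a minimal illustration of the described behavior, not the actual implementation; the function name field_to_tag is hypothetical:

      ```python
      import re

      def field_to_tag(field_name: str) -> str:
          # If the leading character does not match [a-zA-Z], strip that
          # single character before forming the tag name.
          if field_name and not re.match(r"[a-zA-Z]", field_name[0]):
              field_name = field_name[1:]
          return "#" + field_name
      ```

      Both &path and *path thus turn into the tag #path.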

    • Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.

    • Added support for writing H in place of minutes in the cron schedule of scheduled searches — see Cron Schedule Templates for details.
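
      For illustration, a scheduled search meant to run hourly could use a schedule such as the following, where H takes the place of a fixed minute and the concrete minute is chosen for you (see Cron Schedule Templates for the exact semantics):

      ```
      H * * * *
      ```

      This spreads scheduled searches across the hour rather than having them all fire at minute 0.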

    • Added new system permission, PatchGlobal, enabling access to the global patch API.

    • Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that do "export" of an aggregate result via the Query API, as well as the "inner" queries in joins, and queries from scheduled searches.

    • When saving a parser, validate that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.

    • Added an option to output token hashing in JSON format. See the tokenhashing usage described at Hashed Root Access Token.

    • When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable or disable whether the IDP is Default and Humio managed.

Fixed in this release

  • Security

    • Update Scala to address CVE-2022-36944.

  • Falcon Data Replicator

    • Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.

    • Removed the deprecated feature flag FdrFeeds.

  • UI Changes

    • Fixed a bug in the computation of query metadata that is used by the UI, which, for example, caused problems showing pie charts with queries containing both groupBy() and top().

  • GraphQL API

    • Fixed an error when querying for actions in GraphQL on a deleted view.

    • Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.

  • Dashboards and Widgets

    • Fixed an issue where word wrap did not work in the Inspect Panel.

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still worked.

    • Fixed a bug where importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.

  • Functions

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

  • Other

    • Fixed an issue where a user's sessions were not revoked when the user was deleted.

    • Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown because no regions were configured.

    • It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.

    • Fixed an issue where some segments could stall the background process implementing event redaction, which could then result in segments not being merged. The visible symptoms would be segments with a topOffset attribute of -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments were not being merged.

    • We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).

    • Fixed an issue where queries could fail when requests within the cluster exceeded 8 MB each.

    • Fixed an issue where delete events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.

    • Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.

    • Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.

    • Fixed a regression introduced in 1.46.0 that could cause Humio to fail to properly replay data from Kafka when a node was restarted.

  • Packages

    • Previously, package parsing was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.

Humio Server 1.56.2 LTS (2022-09-26)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.56.2 | LTS | 2022-09-26 | Cloud | 2023-09-30 | No | 1.30.0 | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.56.2/server-1.56.2.tar.gz

Bug fixes and updates.

New features and improvements

  • Falcon Data Replicator

    • The feature flag for FDR feeds has been removed. FDR feeds are now generally available.

  • UI Changes

    • The Event List column header menus have been redesigned to be simpler:

      • You can now click the border between column headers in the event list to fit the column to its content.

      • The Event List column Format Panel has been updated to make it easier to manage columns.

      See Formatting Columns.

    • It is now possible to interact directly with the JSON properties and values in the Event List.

    • In the Event List you can assign data types to a column field. You can now make that setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the Event List widget. See Field Data Types.

    • It is now possible to scroll to the selected event on the Search page.

    • Added UI for enabling and disabling social logins on the identity providers page.

    • The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Automation and Alerts

    • When creating a new Action, the name is now retained when you change the Action Type, instead of being cleared. The name can also be changed while the Action is being created.

    • When you create or edit an action it will now show a warning dialog if you have unsaved changes.

    • A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.

    • With the new implementation for running alerts, alerts now start faster after a node has been restarted, making it easier for alerts with a small search interval to trigger on events from the downtime.

  • GraphQL API

    • Deprecates the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.

  • Configuration

    • New dynamic configuration MinimumHumioVersion (default value 0.0.0) that sets the minimum Humio version this cluster will accept starting on. This protects against inadvertently rolling back below the minimum version implied by a feature that has since been enabled.

    • On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be created lazily.

    • Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.

  • Dashboards and Widgets

    • Implemented support for widgets with a fixed time interval on dashboards.

  • Queries

    • When searching for queries using the Query Monitor in Cluster Administration you can now filter queries based on internal and external query IDs.

  • Functions

    • Improved warning message when using groupBy() with limit=max and the limit is exceeded.

    • Query functions selectFromMin() and selectFromMax() are now generally available for use.

    • BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.

      Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version you can expect the following:

      • Queries can be slower than usual initially as the query cache clears itself.

      • Queries may cause deserialization errors if they are run during the upgrade while two or more nodes have different versions. It is recommended to block all queries when upgrading to or downgrading from this version, and to upgrade all nodes at the same time.

  • Other

    • If a view is not found, the cache is now fixed up on all cluster nodes.

    • It is now possible to select an entire permissions group when configuring permissions for a role.

      • Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.

      • Updated the flow of creating and editing roles in the Understanding Your Organization pages.

    • In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.

    • Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z] then strip that from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting examples such as &path and *path as field names to turn into the tag #path.

    • Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.

    • Added support for writing H in place of minutes in the cron schedule of scheduled searches — see Cron Schedule Templates for details.

    • Added new system permission, PatchGlobal, enabling access to the global patch API.

    • Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that do "export" of an aggregate result via the Query API, as well as the "inner" queries in joins, and queries from scheduled searches.

    • When saving a parser, validate that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.

    • Added an option to output token hashing in JSON format. See the tokenhashing usage described at Hashed Root Access Token.

    • When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable or disable whether the IDP is Default and Humio managed.

Fixed in this release

  • Falcon Data Replicator

    • Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.

    • Removed the deprecated feature flag FdrFeeds.

  • UI Changes

    • Fixed a bug in the computation of query metadata that is used by the UI, which, for example, caused problems showing pie charts with queries containing both groupBy() and top().

  • GraphQL API

    • Fixed an error when querying for actions in GraphQL on a deleted view.

    • Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.

  • Dashboards and Widgets

    • Fixed an issue where word wrap did not work in the Inspect Panel.

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still worked.

    • Fixed a bug where importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.

  • Functions

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

  • Other

    • Fixed an issue where a user's sessions were not revoked when the user was deleted.

    • Fixed a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself would not be shown because no regions were configured.

    • It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.

    • Fixed an issue where some segments could stall the background process implementing event redaction, which could then result in segments not being merged. The visible symptoms would be segments with a topOffset attribute of -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments were not being merged.

    • We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).

    • Fixed an issue where queries could fail when requests within the cluster exceeded 8 MB each.

    • Fixed an issue where delete events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.

    • Fixed an issue where the HTTP threads (Akka pool) could get blocked while sending ingest requests to Kafka, which could result in Humio HTTP endpoints not responding.

    • Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.

    • Fixed a regression introduced in 1.46.0 that could cause Humio to fail to properly replay data from Kafka when a node was restarted.

  • Packages

    • Previously, package parsing was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.

Humio Server 1.56.1 GA (2022-09-20)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.56.1 | GA | 2022-09-20 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Update.

New features and improvements

  • UI Changes

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

Humio Server 1.56.0 GA (2022-09-06)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.56.0 | GA | 2022-09-06 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • The Event List column header menus have been redesigned to be simpler:

      • You can now click the border between column headers in the event list to fit the column to its content.

      • The Event List column Format Panel has been updated to make it easier to manage columns.

      See Formatting Columns.

    • It is now possible to interact directly with the JSON properties and values in the Event List.

    • In the Event List you can assign data types to a column field. You can now make that setting the default for a field, and the setting is remembered whenever the field is added to the Event List, e.g. from the fields panel on the Search page. The button for assigning a default data type to a field can be found in the Data type dropdown menu in the column headers of the Event List widget. See Field Data Types.

    • Humio is now a Falcon product. The Humio owl logo and icons are therefore replaced by beautiful falcons.

  • Dashboards and Widgets

    • Implemented support for widgets with a fixed time interval on dashboards.

  • Functions

    • BREAKING CHANGE: Changes to the serialization format of the Intermediate Language representation of queries.

      Description: The serialization format used to serialize the intermediate language representation of queries has changed to a JSON format. This has multiple consequences for on-prem customers. During upgrades to this version and rollbacks from this version you can expect the following:

      • Queries can be slower than usual initially as the query cache clears itself.

      • Queries may cause deserialization errors if they are run during the upgrade while two or more nodes have different versions. It is recommended to block all queries when upgrading to or downgrading from this version, and to upgrade all nodes at the same time.

  • Other

    • Added the possibility of creating a role that grants permissions on the system and organization levels from the UI.

    • Updated the flow of creating and editing roles in the Understanding Your Organization pages.

Fixed in this release

  • Falcon Data Replicator

    • Removed the deprecated feature flag FdrFeeds.

  • GraphQL API

    • Marked all feature flags as preview in GraphQL, which means that once they are no longer needed, they will be removed without being deprecated first.

  • Dashboards and Widgets

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still worked.

    • Fixed a bug where importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

  • Other

    • It is no longer possible to have an upload file action with a path in the file name. This would result in an unusable file being created.

    • Fixed an issue where some segments could stall the background process implementing event redaction, which could then result in segments not being merged. The visible symptoms would be segments with a topOffset attribute of -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments were not being merged.

    • Fixed an issue with tags in Event Forwarding, so that it is now possible to filter on tags using event forwarding rules, and the tags are present in the forwarded events.

Humio Server 1.55.0 GA (2022-08-30)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.55.0 | GA | 2022-08-30 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • It is now possible to scroll to the selected event on the Search page.

  • Automation and Alerts

    • When creating a new Action, the name is now retained when you change the Action Type, instead of being cleared. The name can also be changed while the Action is being created.

    • When you create or edit an action it will now show a warning dialog if you have unsaved changes.

  • Other

    • It is now possible to select an entire permissions group when configuring permissions for a role.

    • In the dialog for entering a name, when creating a new entity (Alerts, Actions, Scheduled Searches, Parsing Data), hitting Enter without filling out the name field will now show an error and will not let you go on to the next page.

Fixed in this release

  • Other

    • Fixed an issue where a user's sessions were not revoked when the user was deleted.

Humio Server 1.54.0 GA (2022-08-23)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.54.0 | GA | 2022-08-23 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • The Log line format type in the Event List will now render fully expanded JSON when a JSON structure starts with a square bracket or curly bracket followed by a newline.

  • Configuration

    • Added environment variable ENABLE_SANDBOXES to make it possible to enable and disable sandbox repositories.

  • Other

    • Added an option to output token hashing in JSON format. See the tokenhashing usage described at Hashed Root Access Token.

    • When configuring SAML and OIDC for an organization, users with the ManageOrganizations permission can now enable or disable whether the IDP is Default and Humio managed.

Fixed in this release

  • Functions

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

  • Other

    • Fixed an issue where queries could fail when requests within the cluster exceeded 8 MB each.

  • Packages

    • Previously, package parsing was very strict, failing when unsupported files were detected. This is no longer the case: unsupported files are now ignored and will not stop the package from installing.

Humio Server 1.53.0 GA (2022-08-16)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.53.0 | GA | 2022-08-16 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • UI Changes

    • Added UI for enabling and disabling social logins on the identity providers page.

  • Queries

    • When searching for queries using the Query Monitor in Cluster Administration you can now filter queries based on internal and external query IDs.

  • Other

    • Reduced memory usage for queries that include noResultUntilDone: true in their inputs. This reduces memory usage in queries that do "export" of an aggregate result via the Query API, as well as the "inner" queries in joins, and queries from scheduled searches.

Fixed in this release

  • Dashboards and Widgets

    • Fixed an issue where word wrap did not work in the Inspect Panel.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The Single Value color threshold list could get into a state where you could not type threshold values into the four text fields.

  • Other

    • We have removed the @host field from the humio-activity logs and the #host tag from the humio-audit log, as we can no longer provide meaningful values for these. The @host field in the humio-metrics logs will remain, but its value will be changed to the vhost id (an integer number).

    • Fixed an issue where delete events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.

Humio Server 1.52.0 GA (2022-08-09)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.52.0 | GA | 2022-08-09 | Cloud | 2023-09-30 | No | 1.30.0 | No

Available for download two days after release.

Bug fixes and updates.

New features and improvements

  • Falcon Data Replicator

    • The feature flag for FDR feeds has been removed. FDR feeds are now generally available.

  • Automation and Alerts

    • A major change has been made to how alert queries are run in order to better reuse live queries when nodes are restarted in a Humio cluster. Find more details at Alerts.

    • With the new implementation for running alerts, alerts now start faster after a node has been restarted, making it easier for alerts with a small search interval to trigger on events from the downtime.

  • GraphQL API

    • Deprecates the defaultSharedTimeIsLive input field on the updateDashboard GraphQL mutation, in favor of updateFrequency.

  • Configuration

    • New dynamic configuration MinimumHumioVersion (default value 0.0.0) that sets the minimum Humio version this cluster will accept starting on. This protects against inadvertently rolling back below the minimum version implied by a feature that has since been enabled.

    • On cloud: added a configuration on dynamic identity providers to control whether users are allowed to be created lazily.

  • Other

    • If a view is not found, the cache is now fixed up on all cluster nodes.

    • Permit the first character in the field name of a field being turned into a tag to be anything. If the first character does not match [a-zA-Z] then strip that from the resulting tag name. This does not alter the set of allowed names for tags, but allows the field names being turned into tags to have any character as the leading one, e.g. permitting examples such as &path and *path as field names to turn into the tag #path.

    • Allow any root user and any user with the PatchGlobal permission to use the global patch API. Previously this required using the server-local special bootstrap root token, which was valid only on the local node and thus hard to use via a load balancer.

    • Added support for writing H in place of minutes in the cron schedule of scheduled searches — see Cron Schedule Templates for details.
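
      The H token spreads scheduled searches across the hour instead of firing them all at the same minute. A rough sketch of how such a token could be resolved deterministically (the hashing scheme here is an assumption for illustration, not LogScale's actual algorithm):

```python
import zlib

def resolve_h_minute(schedule_id: str) -> int:
    # Map a scheduled search's id to a stable minute in 0..59, so the
    # same search always fires at the same minute, while different
    # searches are spread across the hour.
    return zlib.crc32(schedule_id.encode()) % 60

# "H * * * *" resolved for two different scheduled searches:
print(resolve_h_minute("search-a"), resolve_h_minute("search-b"))
```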

    • Added new system permission, PatchGlobal, enabling access to the global patch API.

    • When saving a parser, validate that the fields designated as tag fields have names that are valid as tag field names. Since packages with invalid parsers cannot be installed, if you have an invalid parser in a package, you will need to edit it to keep being able to install it.

Fixed in this release

  • Falcon Data Replicator

    • Fixed a bug where a dropdown for choosing a parser was not visible in a dialog when creating a new FDR feed.

  • UI Changes

    • Fixed a bug in the computation of query metadata that is used by the UI, which, for example, caused problems showing pie charts with queries containing both groupBy() and top().

  • GraphQL API

    • Fixed an error when querying for actions in GraphQL on a deleted view.

  • Other

    • Fixes a bug where a placeholder would appear for the region selector on the login pages, even though the selector itself wouldn't be shown because it has no configured regions.

Humio Server 1.51.3 LTS (2022-12-21)

Version: 1.51.3 | Type: LTS | Release Date: 2022-12-21 | Availability: Cloud | End of Support: 2023-08-31 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.3/server-1.51.3.tar.gz

These notes include entries from the following previous releases: 1.51.0, 1.51.1, 1.51.2

Bug fixes and updates.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

  • The deprecated REST API for parsers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated enabledFeatures query. Use the new featureFlags query instead.

New features and improvements

  • Security

    • The version of Jackson has been upgraded to address CVE-2022-42003 vulnerability.

  • Falcon Data Replicator

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.

      • If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.

      • If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • Added environment variable FDR_USE_PROXY, which makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables.

  • UI Changes

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • Field columns now support multiple formatting options. See Formatting Columns for details.

    • Add missing accessibility features to the login page.

    • Fixed an issue where, in lists of users with avatars showing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.

    • The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

    • The Save As... button is now always displayed on the Search page; see Saving Searches for a description.

    • Improved keyboard accessibility for creating repositories and views.

    • New styling of errors on search and dashboard pages.

    • Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.

    • Toggle switches anywhere in the UI can now be reached with the tab key and operated using the keyboard.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

  • Documentation

    • All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.

  • Automation and Alerts

    • Fixed a bug where an alert with name longer than 50 characters could not be edited.

  • GraphQL API

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.
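
      Setting a dynamic configuration such as QueryResultRowCountLimit is done through GraphQL; below is a sketch of building such a request body in Python. The mutation name setDynamicConfig and its input shape are assumptions for illustration; consult your cluster's GraphQL schema for the exact names:

```python
import json

def dynamic_config_mutation(config: str, value: str) -> str:
    # Builds a GraphQL request body for setting a dynamic configuration.
    # Mutation name and input shape are assumed, not taken from the schema.
    query = (
        "mutation($config: DynamicConfig!, $value: String!) {"
        " setDynamicConfig(input: {config: $config, value: $value}) }"
    )
    return json.dumps({"query": query,
                       "variables": {"config": config, "value": value}})

body = dynamic_config_mutation("QueryResultRowCountLimit", "100000")
print(body)
```

      The resulting JSON would be POSTed to the cluster's GraphQL endpoint using an administrator token.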

    • The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency which can currently only be NEVER or REALTIME, which correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

    • Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

  • Configuration

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is segment-merge-latency-ms.

    • Detect need for higher autoshard count by monitoring ingest request flow in the cluster. Dynamically increase the number of autoshards for each datasource to keep flow on each resulting shard below approximately 2MB/s. New dynamic configuration for this that sets the target maximum rate of ingest for each shard of a datasource: TargetMaxRateForDatasource. Default value is 2000000 (2 MB).
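
      The scaling rule above amounts to growing the shard count until per-shard flow drops below the target. A minimal sketch (required_autoshards is a hypothetical name; the real autosharding logic also involves monitoring and limits not shown here):

```python
import math

TARGET_MAX_RATE_FOR_DATASOURCE = 2_000_000  # bytes/s, the 2 MB default above

def required_autoshards(ingest_rate_bytes_per_s: float, current_shards: int) -> int:
    # Grow the shard count so each shard stays below the target rate;
    # this sketch never shrinks the count.
    needed = math.ceil(ingest_rate_bytes_per_s / TARGET_MAX_RATE_FOR_DATASOURCE)
    return max(current_shards, needed, 1)

print(required_autoshards(9_500_000, 2))  # -> 5: each shard stays under ~2 MB/s
```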

    • Added a new environment variable GLOB_MATCH_LIMIT which sets the maximum number of rows for csv_file in match(..., file=csv_file, glob=true) function. Previously MAX_STATE_SIZE was used to determine this limit. The default value of this variable is 20000. If you've changed the value of MAX_STATE_SIZE, we recommend that you also change GLOB_MATCH_LIMIT to the same value for a seamless upgrade.
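
      The limit guards how large a glob-matching lookup file may be. A sketch of the behavior in Python using fnmatch-style globbing (load_glob_table and glob_match are hypothetical helpers, not LogScale code):

```python
import csv
import fnmatch
import io

GLOB_MATCH_LIMIT = 20_000  # default row cap described above

def load_glob_table(csv_text: str, limit: int = GLOB_MATCH_LIMIT):
    # Reject lookup files with more rows than the configured limit.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if len(rows) > limit:
        raise ValueError(f"csv_file has {len(rows)} rows, limit is {limit}")
    return rows

def glob_match(rows, column, value):
    # Return the first row whose pattern column glob-matches the value,
    # mimicking match(..., file=csv_file, glob=true).
    return next((r for r in rows if fnmatch.fnmatch(value, r[column])), None)

rows = load_glob_table("host,team\nweb-*,frontend\ndb-*,storage\n")
print(glob_match(rows, "host", "web-42"))  # matches the web-* row
```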

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Added a new dynamic configuration GroupDefaultLimit. It can be set using GraphQL. See Limits & Standards for details. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade; see groupBy() for details.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using GraphQL and can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.

    • Introduced new dynamic configuration StateRowLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

    • Change default value for configuration AUTOSHARDING_MAX from 16 to 128.

    • Add environment variable EULA_URL to specify the URL for terms and conditions.

    • Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.

    • Bucket storage now supports a new format for the keys (file names) of the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration BucketStorageKeySchemeVersion has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration; the change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The key_id is persisted in the internal BucketEntity, so that a later change of the key ID used for uploads will make Humio still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

    • New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration BucketStorageWriteVersion to 3. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where verifying even the content length does not work.

    • Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. See also LiveQueryMemoryLimit for live queries. For more details, see Limits & Standards.

  • Dashboards and Widgets

    • Applied stylistic changes for the Inspect Panel used in Widget Editor.

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

    • Bar Chart widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • Added empty states for all widget types that will be rendered when there are no results.

    • When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.

    • Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.

    • The Pie Chart widget now uses the first column for the series as a fallback option.

    • The Dashboard page now displays the current cluster status.

    • Note widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

    • Sorting of Pie Chart widget categories, descending by value. Categories grouped as Others will always be last.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Table widgets will now break lines for newline characters in columns.

    • Better handling of dashboard connections issues during restarts and upgrades.

    • Single Value widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualized as dots on the sparkline.

    • Single Value widget: the use-colorised-thresholds configuration field has been deprecated in favor of color-method.

      In the Single Value widget editor, the Enable Thresholds configuration option is being replaced by an option called Method under the Colors section.

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Functions

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • The groupBy() function now accepts max as value for the limit parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration GroupMaxLimit).

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.

  • Other

    • Users will no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.

    • Bump the version of the Monaco code editor.

    • Streaming queries that fail to validate now return a message of why validation failed.

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Adds a new metric for the temp disk usage. The metric name is temp-disk-usage-bytes and denotes how many bytes are used.

    • Added a log message with the maximum state size seen by the live part of live queries.

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Added a flag indicating whether a feature is experimental.

    • Added a log line for when a query exceeds its allotted memory quota.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

    • Prometheus metrics are now computed by only a single thread at a time. If more requests arrive concurrently, the next request gets the previously computed response.

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Action Type: Upload File.

    • Java in the docker images no longer has the cap_net_bind_service capability and thus Humio cannot bind directly to privileged ports when running as a non-root user.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Improved performance of NDJSON format in S3 Archiving.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.

    • All feature flags now contain a textual description of what features are hidden behind the flag.

    • Adds a logger job for cluster management stats; it logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob. Logs covering all segments contain globalSegmentStats, and logs about singular segments start with segmentStats.

    • Remove remains of default groups and roles. The concept was replaced with UserRoles.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-24823.

    • Update Netty to address CVE-2022-41915.

    • Bump javax.el to address CVE-2021-28170.

    • Update Scala to address CVE-2022-36944.

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Prevent the UI showing errors for smaller connection issues while restarting.

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fixed an issue where some warnings would show twice.

    • Intermittent network issues are no longer reported immediately as an error in the UI.

    • Cloud: Updated the layout for license key page.

    • Fix the dropdown menus closing too early on the home page.

    • Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.
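
      The span limit described above can be expressed as a small calculation (max_segment_span_hours is a hypothetical helper for illustration, not Humio code):

```python
def max_segment_span_hours(retention_hours, max_hours_segment_open: int = 24):
    # 10% of the time-based retention, or 3 x MAX_HOURS_SEGMENT_OPEN
    # when no time-based retention is configured.
    if retention_hours is None:
        return 3 * max_hours_segment_open
    return retention_hours * 0.10

print(max_segment_span_hours(None))     # -> 72: the default mentioned above
print(max_segment_span_hours(24 * 30))  # 30-day retention -> 72.0 hours
```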

    • Fixed an issue where event forwarding still showed as beta.

    • Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed. The index in a block now needs to be read from the blockwriter before adding each item.

    • Fixed a bug where the @id field of events in live queries was off by one.

  • Dashboards and Widgets

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still worked.

    • Fixed a bug where importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

    • The Time Chart widget regression line is no longer affected by the interpolation setting.

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

    • Fixed a bug where ioc:lookup() would sometimes give incorrect results when negated.

    • Revised some of the error messages and warnings regarding join() and selfJoin().

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

    • Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • Centralise the decision for names of files in the bucket, allowing more than one variant.

    • Improved hover messages for strings.

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Fixes the placement of a confirmation dialog when attempting to change retention.

    • Fixed a bug in the decryption code used when decrypting files downloaded from bucket storage when version-for-bucket-writes=3. The bug prevented decrypting files larger than 2GB.

    • Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.

    • Fix a regression in the launcher script causing JVM_LOG_DIR to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

    • Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.

    • Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.

    • When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.

    • Fixed an issue for ephemeral disk based installs where segment files could stay longer on local disks than they were required to, in cases where some nodes listed in the cluster were not alive for extended periods of time.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix performance issue for users with access to many views.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.

    • Fix typo in Unregister node text on the cluster admin UI.

    • Fixed an issue where event forwarder properties were not properly validated.

    • Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.

    • Fix a bug that could cause a NumberFormatException to be thrown from ZooKeeperStatsClient.

    • Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with topOffset attribute being -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments are not being merged.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

    • Kafka has been upgraded to 3.2.0 in the Docker images and in the Humio dependencies.

    • Fixed an issue where LogScale could log secrets to the debug log when configured to use LDAP or when configured to use SSL for Kafka.

    • Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

    • Fixed a bug where multiline comments weren't always highlighted correctly.

Humio Server 1.51.2 LTS (2022-10-05)

Version: 1.51.2 | Type: LTS | Release Date: 2022-10-05 | Availability: Cloud | End of Support: 2023-08-31 | Security Updates: No | Upgrades From: 1.30.0 | Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.2/server-1.51.2.tar.gz

These notes include entries from the following previous releases: 1.51.0, 1.51.1

Bug fixes and updates.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

  • The deprecated REST API for parsers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated enabledFeatures query. Use the new featureFlags query instead.

New features and improvements

  • Falcon Data Replicator

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.

      • If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.

      • If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • Added environment variable FDR_USE_PROXY, which makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables.

  • UI Changes

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • Field columns now support multiple formatting options. See Formatting Columns for details.

    • Add missing accessibility features to the login page.

    • In lists of users, with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.

    • The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

    • The Save As... button is now always displayed on the Search page; see Saving Searches for a description.

    • Improved keyboard accessibility for creating repositories and views.

    • New styling of errors on search and dashboard pages.

    • Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.

    • Toggle switches anywhere in the UI can now be reached with the tab key and operated using the keyboard.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

  • Documentation

    • All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.

  • Automation and Alerts

    • Fixed a bug where an alert with name longer than 50 characters could not be edited.

  • GraphQL API

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.

    • The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency which can currently only be NEVER or REALTIME, which correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

    • Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

  • Configuration

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is segment-merge-latency-ms.

    • Detect need for higher autoshard count by monitoring ingest request flow in the cluster. Dynamically increase the number of autoshards for each datasource to keep flow on each resulting shard below approximately 2MB/s. New dynamic configuration for this that sets the target maximum rate of ingest for each shard of a datasource: TargetMaxRateForDatasource. Default value is 2000000 (2 MB).

    • Added a new environment variable GLOB_MATCH_LIMIT which sets the maximum number of rows for csv_file in match(..., file=csv_file, glob=true) function. Previously MAX_STATE_SIZE was used to determine this limit. The default value of this variable is 20000. If you've changed the value of MAX_STATE_SIZE, we recommend that you also change GLOB_MATCH_LIMIT to the same value for a seamless upgrade.

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Added a new dynamic configuration GroupDefaultLimit. It can be set using GraphQL. See Limits & Standards for details. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade; see groupBy() for details.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using GraphQL and can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.
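
The precedence rule stated above is simple: the dynamic configuration wins over the environment variable whenever it is set. A sketch with an illustrative helper name:

```python
def effective_join_limit(join_row_limit, max_join_limit):
    """JoinRowLimit (dynamic config) takes precedence over MAX_JOIN_LIMIT (env var)."""
    return join_row_limit if join_row_limit is not None else max_join_limit
```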

    • Introduced new dynamic configuration StateRowLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

    • Change default value for configuration AUTOSHARDING_MAX from 16 to 128.

    • Add environment variable EULA_URL to specify the URL for terms and conditions.

    • Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.

    • Bucket storage now has support for a new format for the keys (file names) of the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration BucketStorageKeySchemeVersion has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration; the change takes effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Support for KMS on the S3 bucket used for Bucket Storage. Specify the full ARN of the key. The key ID is persisted in the internal BucketEntity, so that if the key used for uploads is later changed, Humio still refers to the old key ID when downloading files uploaded with the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

    • New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration BucketStorageWriteVersion to 3. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE that makes bucket storage skip verifying checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where even content-length verification does not work.

    • Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.
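
The default threshold arithmetic above works out as follows (an illustrative helper, not the actual job code):

```python
def default_merge_warn_threshold_ms(max_hours_segment_open: int = 24) -> int:
    """Default MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING:
    2 x MAX_HOURS_SEGMENT_OPEN, expressed in milliseconds."""
    return 2 * max_hours_segment_open * 60 * 60 * 1000
```

With the 24-hour default for MAX_HOURS_SEGMENT_OPEN, the warning fires for mini-segments older than 48 hours.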

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. See also LiveQueryMemoryLimit for live queries. For more details, see Limits & Standards.

  • Dashboards and Widgets

    • Applied stylistic changes for the Inspect Panel used in Widget Editor.

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

    • Bar Chart widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • Added empty states for all widget types that will be rendered when there are no results.

    • When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.

    • Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.

    • The Pie Chart widget now uses the first column for the series as a fallback option.

    • The Dashboard page now displays the current cluster status.

    • Note widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

    • Pie Chart widget categories are now sorted descending by value. Categories grouped as Others will always be last.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Table widgets will now break lines for newline characters in columns.

    • Better handling of dashboard connection issues during restarts and upgrades.

    • Single Value widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualized as dots on the sparkline.

    • Single Value widget configuration: the field use-colorised-thresholds has been deprecated in favor of color-method.

      In the Single Value widget editor, the configuration option Enable Thresholds is replaced by an option called Method under the Colors section.

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Functions

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • The groupBy() function now accepts max as value for the limit parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration GroupMaxLimit).
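
The resolution of the special max value can be sketched as follows; this is an illustrative helper, not LogScale's implementation, and the function name is hypothetical:

```python
def resolve_group_limit(limit, group_max_limit: int) -> int:
    """Resolve a groupBy() limit argument: the literal "max" maps to the
    dynamic configuration GroupMaxLimit; anything else is a plain number."""
    if limit == "max":
        return group_max_limit
    return int(limit)
```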

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.

  • Other

    • All users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.

    • Bump the version of the Monaco code editor.

    • Streaming queries that fail to validate now return a message explaining why validation failed.

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Adds a new metric for the temp disk usage. The metric name is temp-disk-usage-bytes and denotes how many bytes are used.

    • Added a log message with the maximum state size seen by the live part of live queries.

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Add a flag indicating whether a feature is experimental.

    • Added a log line for when a query exceeds its allotted memory quota.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

    • Compute the next set of Prometheus metrics in only a single thread at a time. If more requests arrive concurrently, the next request is served the previous response.
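
The described pattern is a try-lock around the expensive computation, with concurrent callers falling back to the cached snapshot. A minimal sketch under that assumption (class and method names are illustrative):

```python
import threading

class MetricsCache:
    """Only one thread computes a fresh metrics snapshot at a time;
    concurrent callers are served the previously computed response."""

    def __init__(self, compute):
        self._compute = compute
        self._lock = threading.Lock()
        self._last = None

    def get(self):
        if self._lock.acquire(blocking=False):
            try:
                self._last = self._compute()  # we won the race: recompute
            finally:
                self._lock.release()
        return self._last  # losers get the previous snapshot
```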

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Action Type: Upload File.

    • Java in the docker images no longer has the cap_net_bind_service capability and thus Humio cannot bind directly to privileged ports when running as a non-root user.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Improved performance of NDJSON format in S3 Archiving.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.

    • All feature flags now contain a textual description of what feature is hidden behind the flag.

    • Adds a logger job for cluster management stats; it logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob; logs covering all segments contain globalSegmentStats, while logs about individual segments start with segmentStats.

    • Removed the remains of default groups and roles. The concept was replaced with UserRoles.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-24823.

    • Bump javax.el to address CVE-2021-28170.

    • Update Scala to address CVE-2022-36944.

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Prevent the UI showing errors for smaller connection issues while restarting.

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fixed an issue where some warnings would show twice.

    • Intermittent network issues are no longer reported immediately as an error in the UI.

    • Cloud: Updated the layout for license key page.

    • Fix the dropdown menus closing too early on the home page.

    • Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.
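
The span limit described in that fix can be sketched as simple arithmetic (an illustrative helper, not the shipped code):

```python
def max_segment_span_hours(retention_hours, max_hours_segment_open: int = 24) -> float:
    """Max desired segment time span: 10% of time-based retention,
    or 3 x MAX_HOURS_SEGMENT_OPEN when no time-based retention is set."""
    if retention_hours is not None:
        return 0.10 * retention_hours
    return 3.0 * max_hours_segment_open
```

With the 24-hour default and no time-based retention, the limit is 72 hours, matching the entry above.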

    • Fixed an issue where event forwarding still showed as beta.

    • Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the resulting target segment never being executed.

      The index in a block needs reading from the blockwriter before adding each item.

    • Fixed a bug where the @id field of events in live queries was off by one.

  • Dashboards and Widgets

    • Fixed a bug where certain queries would make it seem that all widgets were incompatible, even though the table view still works.

    • Importing a dashboard with Shared time enabled and Live disabled would import the dashboard with Live enabled. Likewise, when creating a new dashboard from a template, Live would be on.

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

    • The Time Chart widget regression line is no longer affected by the interpolation setting.

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

    • Fixed a bug where ioc:lookup() would sometimes give incorrect results when negated.

    • Revised some of the error messages and warnings regarding join() and selfJoin().

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

    • Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • Centralised the decision for names of files in the bucket, allowing more than one variant.

    • Improved hover messages for strings.

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Fixes the placement of a confirmation dialog when attempting to change retention.

    • Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.

    • Fix a regression in the launcher script causing JVM_LOG_DIR to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

    • Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.

    • Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.

    • When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.

    • Fixed an issue for ephemeral disk based installs where segment files could stay longer on local disks than they were required to, in cases where some nodes listed in the cluster were not alive for extended periods of time.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix performance issue for users with access to many views.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.

    • Fix typo in the Unregister node text on the cluster admin UI.

    • Fixed an issue where event forwarder properties were not properly validated.

    • Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.

    • Fix a bug that could cause a NumberFormatException to be thrown from ZooKeeperStatsClient.

    • Fixed an issue where some segments could stall the background process implementing event redaction. This could then result in segments not being merged. The visible symptom would be segments with topOffset attribute being -1, and MiniSegmentMergeLatencyLoggerJob logging that some segments are not being merged.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

    • Kafka has been upgraded to 3.2.0 in the Docker images and in the Humio dependencies.

    • Fix a regression introduced in 1.46.0 that can cause Humio to fail to properly replay data from Kafka when a node is restarted.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

    • Fixed a bug where multiline comments weren't always highlighted correctly.

Humio Server 1.51.1 LTS (2022-08-29)

Version: 1.51.1
Type: LTS
Release Date: 2022-08-29
Availability: Cloud
End of Support: 2023-08-31
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.1/server-1.51.1.tar.gz

These notes include entries from the following previous releases: 1.51.0

Bug fix.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

  • The deprecated REST API for parsers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated enabledFeatures query. Use the new featureFlags query instead.

New features and improvements

  • Falcon Data Replicator

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.

    • If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested in its entirety, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • Added environment variable FDR_USE_PROXY, which makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables.
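
A minimal node configuration sketch combining the two FDR variables named above. Only ENABLE_FDR_POLLING_ON_NODE and FDR_USE_PROXY come from these entries; the exact HTTP_PROXY_* variable names are not spelled out here, so they are left generic.

```
ENABLE_FDR_POLLING_ON_NODE=true
FDR_USE_PROXY=true
# ...plus your HTTP_PROXY_* settings, per the proxy configuration documentation
```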

  • UI Changes

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • Field columns now support multiple formatting options. See Formatting Columns for details.

    • Add missing accessibility features to the login page.

    • In lists of users with avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial. This has been fixed.

    • The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

    • The Save As... button is now always displayed on the Search page, see it described at Saving Searches.

    • Improved keyboard accessibility for creating repositories and views.

    • New styling of errors on search and dashboard pages.

    • Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.

    • Toggle switches anywhere in the UI can now be accessed and operated using the keyboard, including via the tab key.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

  • Documentation

    • All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.

  • Automation and Alerts

    • Fixed a bug where an alert with name longer than 50 characters could not be edited.

  • GraphQL API

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.

    • The GQL API mutation updateDashboard has been updated to take a new argument updateFrequency which can currently only be NEVER or REALTIME, which correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

    • Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

  • Configuration

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is segment-merge-latency-ms.

    • Detect the need for a higher autoshard count by monitoring ingest request flow in the cluster. The number of autoshards for each datasource is dynamically increased to keep the flow on each resulting shard below approximately 2 MB/s. A new dynamic configuration, TargetMaxRateForDatasource, sets the target maximum rate of ingest for each shard of a datasource. Default value is 2000000 (2 MB).

    • Added a new environment variable GLOB_MATCH_LIMIT which sets the maximum number of rows for csv_file in match(..., file=csv_file, glob=true) function. Previously MAX_STATE_SIZE was used to determine this limit. The default value of this variable is 20000. If you've changed the value of MAX_STATE_SIZE, we recommend that you also change GLOB_MATCH_LIMIT to the same value for a seamless upgrade.

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Added a new dynamic configuration GroupDefaultLimit. It can be set using GraphQL. See Limits & Standards for details. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade; see groupBy() for details.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using GraphQL and can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.

    • Introduced new dynamic configuration StateRowLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

    • Change default value for configuration AUTOSHARDING_MAX from 16 to 128.

    • Add environment variable EULA_URL to specify the URL for terms and conditions.

    • Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.

    • Bucket storage now has support for a new format for the keys (file names) of the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration BucketStorageKeySchemeVersion has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration; the change takes effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Support for KMS on the S3 bucket used for Bucket Storage. Specify the full ARN of the key. The key ID is persisted in the internal BucketEntity, so that if the key used for uploads is later changed, Humio still refers to the old key ID when downloading files uploaded with the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

    • New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the dynamic configuration BucketStorageWriteVersion to 3. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE that makes bucket storage skip verifying checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where even content-length verification does not work.

    • Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. See also LiveQueryMemoryLimit for live queries. For more details, see Limits & Standards.

  • Dashboards and Widgets

    • Applied stylistic changes for the Inspect Panel used in Widget Editor.

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

    • Bar Chart widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • Added empty states for all widget types that will be rendered when there are no results.

    • When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.

    • Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.

    • The Pie Chart widget now uses the first column for the series as a fallback option.

    • The Dashboard page now displays the current cluster status.

    • Note widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

    • Pie Chart widget categories are now sorted descending by value. Categories grouped as Others will always be last.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Table widgets will now break lines for newline characters in columns.

    • Better handling of dashboard connection issues during restarts and upgrades.

    • Single Value widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualized as dots on the sparkline.

    • Single Value widget configuration: the field use-colorised-thresholds has been deprecated in favor of color-method.

      In the Single Value widget editor, the configuration option Enable Thresholds is replaced by an option called Method under the Colors section.

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Functions

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • The groupBy() function now accepts max as value for the limit parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration GroupMaxLimit).

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.
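
The groupBy() entries above can be illustrated with a query sketch in the LogScale query language (the field name is hypothetical; limit=max resolves to the GroupMaxLimit dynamic configuration):

```logscale
// Group on a hypothetical "ip" field, raising the group limit
// to the largest allowed value (GroupMaxLimit).
groupBy(field=ip, function=count(), limit=max)
```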

  • Other

    • Users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.

    • Bump the version of the Monaco code editor.

    • Streaming queries that fail to validate now return a message explaining why validation failed.

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Added a new metric, temp-disk-usage-bytes, which reports how many bytes of temporary disk space are in use.

    • Added a log message with the maximum state size seen by the live part of live queries.

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Added a flag indicating whether a feature is experimental.

    • Added a log line for when a query exceeds its allotted memory quota.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

    • The next set of Prometheus metrics is now computed by only a single thread at a time. If more requests arrive, the next request gets the previous response.

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Action Type: Upload File.

    • Java in the docker images no longer has the cap_net_bind_service capability and thus Humio cannot bind directly to privileged ports when running as a non-root user.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Improved performance of NDJSON format in S3 Archiving.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.

    • All feature flags now contain a textual description of the features hidden behind the flag.

    • Added a logger job for cluster management stats. It logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob; logs covering all segments contain globalSegmentStats, while logs about individual segments start with segmentStats.

    • Removed the remnants of default groups and roles. The concept was replaced with UserRoles.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-24823.

    • Bump javax.el to address CVE-2021-28170.

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Prevented the UI from showing errors for minor connection issues while restarting.

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fixed an issue where some warnings would show twice.

    • Intermittent network issues are no longer reported immediately as errors in the UI.

    • Cloud: Updated the layout of the license key page.

    • Fix the dropdown menus closing too early on the home page.

    • Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the repository's time-based retention setting. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.

    • Fixed an issue where event forwarding still showed as beta.

    • Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the target segment never being executed. The index in a block is now read from the blockwriter before adding each item.

    • Fixed a bug where the @id field of events in live queries was off by one.

  • Dashboards and Widgets

    • The Apply Filter button on the dashboard correctly applies the typed filter again.

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

    • The Time Chart widget regression line is no longer affected by the interpolation setting.

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

    • Fixed a bug where ioc:lookup() would sometimes give incorrect results when negated.

    • Revised some of the error messages and warnings regarding join() and selfJoin().

    • Fixed a recent bug which caused the category links from groupBy()-groups to be lost when a subsequent sort() was used, and also made grouping-based charts (bar, pie, heat map) unusable in such cases.

    • Fixed a bug related to query result metadata for some functions when used as the last aggregate function in a query.

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • Centralised the decision for names of files in the bucket, allowing more than one variant.

    • Improved hover messages for strings.

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Fixed the placement of a confirmation dialog when attempting to change retention.

    • Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.

    • Fix a regression in the launcher script causing JVM_LOG_DIR to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

    • Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.

    • Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.

    • When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.

    • Fixed an issue for ephemeral-disk-based installs where segment files could stay on local disks longer than required, in cases where some nodes listed in the cluster were not alive for extended periods of time.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix performance issue for users with access to many views.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.

    • Fixed a typo in the Unregister node text in the cluster admin UI.

    • Fixed an issue where event forwarder properties were not properly validated.

    • Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.

    • Fix a bug that could cause a NumberFormatException to be thrown from ZooKeeperStatsClient.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and log an error if this occurs.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where queries could fail when the requests within the cluster were more than 8 MB each.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

    • Kafka has been upgraded to 3.2.0 in the Docker images and in the Humio dependencies.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

    • Fixed a bug where multiline comments weren't always highlighted correctly.

Humio Server 1.51.0 LTS (2022-08-15)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.51.0  | LTS  | 2022-08-15   | Cloud        | 2023-08-31     | No               | 1.30.0        | No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.51.0/server-1.51.0.tar.gz

Bug fixes and updates.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

  • The deprecated REST API for parsers has been removed.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated enabledFeatures query. Use the new featureFlags query instead.

New features and improvements

  • Falcon Data Replicator

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.

      • If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.

      • If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • Added environment variable FDR_USE_PROXY, which makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables.

  • UI Changes

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • Field columns now support multiple formatting options. See Formatting Columns for details.

    • Add missing accessibility features to the login page.

    • In lists of users, with user avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.

    • The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

    • The Save As... button is now always displayed on the Search page, see it described at Saving Searches.

    • Improved keyboard accessibility for creating repositories and views.

    • New styling of errors on search and dashboard pages.

    • Added an icon and a hint to disabled side navigation menu items, telling the user the reason for the item being disabled.

    • Toggle switches anywhere in the UI can now be reached with the tab key and operated using the keyboard.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

  • Documentation

    • All documentation links have been updated after the restructuring of the documentation site. Please contact support if you experience any broken links.

  • Automation and Alerts

    • Fixed a bug where an alert with a name longer than 50 characters could not be edited.

  • GraphQL API

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.

    • The GraphQL API mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be NEVER or REALTIME. These correspond respectively to dashboards where queries are never updated after first completion, and dashboards where query results are updated indefinitely.

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

    • Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

  • Configuration

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is segment-merge-latency-ms.

    • Detects the need for a higher autoshard count by monitoring the ingest request flow in the cluster, and dynamically increases the number of autoshards for each datasource to keep the flow on each resulting shard below approximately 2 MB/s. A new dynamic configuration, TargetMaxRateForDatasource, sets the target maximum rate of ingest for each shard of a datasource. The default value is 2000000 (2 MB).

    • Added a new environment variable GLOB_MATCH_LIMIT which sets the maximum number of rows for csv_file in match(..., file=csv_file, glob=true) function. Previously MAX_STATE_SIZE was used to determine this limit. The default value of this variable is 20000. If you've changed the value of MAX_STATE_SIZE, we recommend that you also change GLOB_MATCH_LIMIT to the same value for a seamless upgrade.

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Added a new dynamic configuration GroupDefaultLimit. It can be set using GraphQL. See Limits & Standards for details. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade; see groupBy() for details.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using GraphQL and can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.

    • Introduced new dynamic configuration StateRowLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

    • Change default value for configuration AUTOSHARDING_MAX from 16 to 128.

    • Added environment variable EULA_URL to specify the URL for terms and conditions.

    • Added a link to the humio-activity repository, useful for debugging IDP configurations, to the IDP setup page.

    • Bucket storage now supports a new format for the keys (file names) of the files placed in the bucket. When the new format is applied, the listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as "HCP". The new format is applied only to buckets created after the dynamic configuration BucketStorageKeySchemeVersion has been set to "2". Existing clusters can start using the new format for new files by setting this dynamic configuration; the change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The key_id is persisted in the internal BucketEntity, so that a later change of the ID of the key used for uploads will make Humio still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity that tracks which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

    • New file format for files uploaded to bucket storage that allows files larger than 2 GB to be written to bucket storage. This may be turned on by setting the dynamic configuration BucketStorageWriteVersion to 3. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where even verifying the content length does not work.

    • Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. See also LiveQueryMemoryLimit for live queries. For more details, see Limits & Standards.
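
As a sketch of where the new GLOB_MATCH_LIMIT variable comes into play (the file, field, and column names are hypothetical), a match() call with glob=true reads a CSV of wildcard patterns whose row count is now capped by GLOB_MATCH_LIMIT rather than MAX_STATE_SIZE:

```logscale
// "patterns.csv" is a hypothetical uploaded file whose first column
// holds glob patterns; its row limit is governed by GLOB_MATCH_LIMIT.
match(file="patterns.csv", field=url, glob=true)
| groupBy(url)
```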

  • Dashboards and Widgets

    • Applied stylistic changes for the Inspect Panel used in Widget Editor.

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

    • Bar Chart widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • Added empty states for all widget types that will be rendered when there are no results.

    • When importing an existing dashboard with a static Shared time, recent changes in the time selection would make those dashboards live.

    • Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.

    • The Pie Chart widget now uses the first column for the series as a fallback option.

    • The Dashboard page now displays the current cluster status.

    • Note widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

    • Pie Chart widget categories are now sorted descending by value. Categories grouped as Others will always be last.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Table widgets will now break lines for newline characters in columns.

    • Better handling of dashboard connection issues during restarts and upgrades.

    • Single Value widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualized as dots on the sparkline.

    • Single Value widget configuration: the field use-colorised-thresholds has been deprecated in favor of color-method.

      Single Value widget Editor: the configuration option Enable Thresholds is replaced by an option called Method under the Colors section.

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Functions

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • The groupBy() function now accepts max as value for the limit parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration GroupMaxLimit).

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.
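
The join() validation entry above concerns its field and key parameters; a minimal sketch (the tag values and field name are hypothetical) showing the non-empty field list that the new validation enforces:

```logscale
// Join current events with a subquery on a hypothetical "hostname" field;
// passing an empty list for field or key now yields a meaningful error.
#type=accesslog
| join({#type=alerts}, field=hostname)
```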

  • Other

    • Users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.

    • Bump the version of the Monaco code editor.

    • Streaming queries that fail to validate now return a message explaining why validation failed.

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Added a new metric, temp-disk-usage-bytes, which reports how many bytes of temporary disk space are in use.

    • Added a log message with the maximum state size seen by the live part of live queries.

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Added a flag indicating whether a feature is experimental.

    • Added a log line for when a query exceeds its allotted memory quota.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

    • The next set of Prometheus metrics is now computed by only a single thread at a time. If more requests arrive, the next request gets the previous response.

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Action Type: Upload File.

    • Java in the docker images no longer has the cap_net_bind_service capability and thus Humio cannot bind directly to privileged ports when running as a non-root user.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Improved performance of NDJSON format in S3 Archiving.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.

    • All feature flags now contain a textual description of the features hidden behind the flag.

    • Added a logger job for cluster management stats. It logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob; logs covering all segments contain globalSegmentStats, while logs about individual segments start with segmentStats.

    • Removed the remnants of default groups and roles. The concept was replaced with UserRoles.
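
The cluster management stats entry above makes the stats searchable; a hedged query sketch for finding them (a sketch only, assuming the logs are searched in the repository that holds Humio's own logs):

```logscale
// Find the periodic cluster management stats logs (emitted every 2 minutes).
class="c.h.c.ClusterManagementStatsLoggerJob"
| groupBy(class, function=count())
```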

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-24823.

    • Bump javax.el to address CVE-2021-28170.

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Prevented the UI from showing errors for minor connection issues while restarting.

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fixed an issue where some warnings would show twice.

    • Intermittent network issues are no longer reported immediately as errors in the UI.

    • Cloud: Updated the layout of the license key page.

    • Fix the dropdown menus closing too early on the home page.

    • Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the repository's time-based retention setting. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.

    • Fixed an issue where event forwarding still showed as beta.

    • Fixed an issue where deleting events from a mini-segment could result in the merge of those mini-segments into the target segment never being executed. The index in a block is now read from the blockwriter before adding each item.

    • Fixed a bug where the @id field of events in live queries was off by one.

  • Dashboards and Widgets

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

    • The Time Chart widget regression line is no longer affected by the interpolation setting.

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

    • Fixed a bug where ioc:lookup() would sometimes give incorrect results when negated.

    • Revised some of the error messages and warnings regarding join() and selfJoin().

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • Centralised the decision for names of files in the bucket, allowing more than one variant.

    • Improved hover messages for strings.

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Fixes the placement of a confirmation dialog when attempting to change retention.

    • Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.

      Fix a regression in the launcher script causing JVM_LOG_DIR to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

      Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.

      Fix a bug that could cause Humio to attempt to merge mini-segments from one datasource into a segment in another datasource, causing an error to be thrown.

      When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix performance issue for users with access to many views.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.

    • Fixed a typo in the Unregisters node text on the cluster admin UI.

    • Fixed an issue where event forwarder properties were not properly validated.

    • Reduced the timeout used when testing event forwarders in order to get a better error when timeouts happen.

    • Fix a bug that could cause a NumberFormatException to be thrown from ZooKeeperStatsClient.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

    • Upgraded Kafka to 3.2.0 in the Docker images and in the Humio dependencies.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

    • Fixed a bug where multiline comments weren't always highlighted correctly.

Humio Server 1.50.0 GA (2022-08-02)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.50.0GA2022-08-02

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

New features and improvements

  • UI Changes

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • Adds an icon and a hint to a disabled side navigation menu item that tells the user the reason for it being disabled.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

  • Documentation

    • All documentation links have been updated following the restructuring of the documentation site. Please contact support if you experience any broken links.

  • GraphQL API

    • The GraphQL mutation updateDashboard has been updated to take a new argument updateFrequency, which can currently only be NEVER or REALTIME; these correspond respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".
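As a sketch, a call using the new argument might look like the following (the input shape and selection set are assumptions, not taken from the published schema):

```graphql
mutation {
  updateDashboard(input: {
    # hypothetical dashboard id and input shape
    id: "abc123"
    updateFrequency: NEVER  # or REALTIME
  }) {
    # hypothetical return field
    dashboard { id }
  }
}
```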

  • Dashboards and Widgets

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

  • Functions

    • Added validation to the field parameter of the top() function, so empty lists will be rejected with a meaningful error message.

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.

  • Other

    • Streaming queries that fail to validate now return a message of why validation failed.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Action Type: Upload File.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.
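A CSV file uploaded by the new action type could then be referenced in a search with match(); a minimal sketch, where the file and field names are hypothetical:

```
// File and field names are illustrative, not from the release notes
#repo=example
| match(file="suspicious_ips.csv", field=src_ip)
```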

Fixed in this release

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed an issue where validation of +/- Infinity as integer arguments would crash.

    • Fixed an issue where event forwarding still showed as beta.

  • Functions

    • Fixed an issue where join() would not produce the correct results when mode=left was set.

  • Other

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix performance issue for users with access to many views.

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

Humio Server 1.49.1 GA (2022-07-26)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.49.1GA2022-07-26

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for parsers has been removed.

New features and improvements

  • UI Changes

    • The Save As... button is now always displayed on the Search page, see it described at Saving Searches.

  • Automation and Alerts

    • Fixed a bug where an alert with name longer than 50 characters could not be edited.

  • Functions

    • The groupBy() function now accepts max as a value for the limit parameter, which sets the limit to the largest allowed value (as configured by the dynamic configuration GroupMaxLimit).
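For example, a sketch using the new parameter value (the field name is illustrative):

```
groupBy(field=src_ip, function=count(), limit=max)
```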

  • Other

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Java in the Docker images no longer has the cap_net_bind_service capability, and thus Humio cannot bind directly to privileged ports when running as a non-root user.

  • Packages

    • Parser installation will now be ignored when installing a package into a system repository.

Fixed in this release

  • UI Changes

    • Fixed an issue where some warnings would show twice.

  • Functions

    • Revised some of the error messages and warnings regarding join() and selfJoin().

  • Other

    • Fix a bug that could cause a NumberFormatException to be thrown from ZooKeeperStatsClient.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

Humio Server 1.49.0 Not Released (2022-07-26)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.49.0Not Released2022-07-26

Internal Only

2023-07-31No1.30.0No

Available for download two days after release.

Not released.

Humio Server 1.48.1 GA (2022-07-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.48.1GA2022-07-19

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Remove the following feature flags and their usage: EnterpriseLogin, OidcDynamicIdpProviders, UsagePage, RequestToActivity, CommunityNewDemoData.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated enabledFeatures query. Use the new featureFlags query instead.
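A query against the replacement might look like this sketch (the selection set is an assumption based on the flag descriptions mentioned elsewhere in these notes):

```graphql
query {
  featureFlags {
    # hypothetical fields
    flag
    description
    experimental
  }
}
```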

New features and improvements

  • UI Changes

    • Add missing accessibility features to the login page.

    • The Live checkbox is now no longer checked automatically when changing the value of the time window in the Time Selector. See Changing Time Interval for details.

    • Updated styling on the login pages.

  • GraphQL API

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

  • Other

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Added a flag indicating whether a feature is experimental.

    • All feature flags now contain a textual description of what features are hidden behind the flag.

Fixed in this release

  • UI Changes

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • Dashboards and Widgets

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

  • Other

    • Fixes the placement of a confirmation dialog when attempting to change retention.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

Humio Server 1.48.0 Not Released (2022-07-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.48.0Not Released2022-07-19

Internal Only

2023-07-31No1.30.0No

Available for download two days after release.

Not released.

Humio Server 1.47.1 GA (2022-07-12)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.47.1GA2022-07-12

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

New features and improvements

  • Falcon Data Replicator

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.
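For example, a node could be opted out of the new default like this (a sketch; the accepted value format is assumed to be a boolean string):

```shell
# Disable FDR polling on this node (value format is an assumption)
export ENABLE_FDR_POLLING_ON_NODE=false
```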

  • UI Changes

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

  • GraphQL API

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. This flag can be set by administrators through GraphQL. See Limits & Standards for more details.

  • Configuration

    • Added a new dynamic configuration GroupDefaultLimit. This can be done through GraphQL. See Limits & Standards for details. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade, see groupBy() for details.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using GraphQL and can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.

    • Introduced new dynamic configuration StateRowLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using GraphQL. See Limits & Standards for details.

    • Support for KMS on an S3 bucket for Bucket Storage. Specify the full ARN of the key. The key_id is persisted in the internal BucketEntity, so that if the ID of the key used for uploads is later changed, Humio still refers to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where even content-length verification does not work.

      Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. See also LiveQueryMemoryLimit for live queries. For more details, see Limits & Standards.
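A hedged sketch of setting one of these dynamic configurations through GraphQL (the mutation name, input shape, and value format are assumptions, not taken from the schema):

```graphql
mutation {
  # hypothetical mutation and input shape; value shown as bytes
  setDynamicConfig(input: {config: QueryMemoryLimit, value: "1073741824"})
}
```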

  • Dashboards and Widgets

    • Applied stylistic changes for the Inspect Panel used in Widget Editor.

    • Table widgets will now break lines for newline characters in columns.

  • Other

    • Users no longer have access to the audit log or the search-all view by default. Access can be granted with permissions.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Improved performance of validation of keys in tags.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

    • Prometheus metrics are now computed by only a single thread at a time. If more requests arrive concurrently, the next request receives the previous response.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

Fixed in this release

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

  • Other

    • Fixed a typo in the Unregisters node text on the cluster admin UI.

Humio Server 1.47.0 Not Released (2022-07-12)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.47.0Not Released2022-07-12

Internal Only

2023-07-31No1.30.0No

Available for download two days after release.

Not released.

Humio Server 1.46.0 GA (2022-07-05)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.46.0GA2022-07-05

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

New features and improvements

  • UI Changes

    • Fixed an issue in lists of users, with user avatars containing user initials, where the current user would sometimes appear to have an opening parenthesis as their last initial.

    • New styling of errors on search and dashboard pages.

  • GraphQL API

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

  • Dashboards and Widgets

    • Added empty states for all widget types that will be rendered when there are no results.

    • Introducing the Heat Map widget that visualizes aggregated data as a colorised grid.

    • Sorting of Pie Chart widget categories, descending by value. Categories grouped as Others will always be last.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • Better handling of dashboard connection issues during restarts and upgrades.

  • Other

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Remove remains of default groups and roles. The concept was replaced with UserRoles.

Fixed in this release

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Intermittent network issues are no longer reported immediately as an error in the UI.

  • Configuration

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.

  • Functions

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Upgraded Kafka to 3.2.0 in the Docker images and in the Humio dependencies.

Humio Server 1.45.0 GA (2022-06-28)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.45.0GA2022-06-28

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

New features and improvements

  • Configuration

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last mini-segment being written in a sequence with the same merge target, and those mini-segments being merged. The metric name is segment-merge-latency-ms.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.

  • Dashboards and Widgets

    • The Bar Chart widget now works with bucket query results.

    • The Pie Chart widget now uses the first column for the series as a fallback option.

    • Note widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

  • Other

    • Bump the version of the Monaco code editor.

    • Added a log message with the maximum state size seen by the live part of live queries.

Fixed in this release

  • UI Changes

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fix the dropdown menus closing too early on the home page.

  • Dashboards and Widgets

    • The Time Chart widget regression line is no longer affected by the interpolation setting.

  • Other

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and error log if this occurs.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

Humio Server 1.44.0 GA (2022-06-21)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.44.0GA2022-06-21

Cloud

2023-08-31No1.30.0No

Available for download two days after release.

Bug fixes and an updated dependency, released to cloud only.

Removed

Items that have been removed as of this release.

API

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

New features and improvements

  • UI Changes

    • Improved keyboard accessibility for creating repositories and views.

    • Toggle switches anywhere in the UI can now be accessed with the keyboard using the tab key.

  • Configuration

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Introduced the new dynamic configuration StateRowLimit. It can be set using GraphQL.

    • Introduced new dynamic configuration flag JoinRowLimit. It can be set using GraphQL. The flag can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If JoinRowLimit is set, its value will be used instead of MAX_JOIN_LIMIT; if it is not set, MAX_JOIN_LIMIT will be used.

    • Introduced the new dynamic configuration QueryMemoryLimit. It can be set using GraphQL. The flag replaces the environment variable MAX_MEMORY_FOR_REDUCE, so if you have changed the value of MAX_MEMORY_FOR_REDUCE, please use QueryMemoryLimit now instead. See Limits & Standards for more details.

    • Added a link to the humio-activity repository, useful for debugging IDP configurations, on the IDP setup page.

    • Added a new environment variable GROUPBY_DEFAULT_LIMIT which sets the default value for the limit parameter of groupBy(). See groupBy() documentation for details.
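As a sketch, the new environment variable could be set like this (the value is illustrative; see the groupBy() documentation for actual limits):

```shell
# Illustrative value; must not exceed the configured maximum for groupBy()
export GROUPBY_DEFAULT_LIMIT=20000
```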

  • Dashboards and Widgets

    • Bar Chart widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • The dashboards page now displays the current cluster status.

    • The Normalize option for the World Map widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Single Value widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualized as dots on the sparkline.

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Other

    • Adds a new metric for the temp disk usage. The metric name is temp-disk-usage-bytes and denotes how many bytes are used.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Improved performance of NDJSON format in S3 Archiving.

    • Adds a logger job for cluster management stats; it logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob. Logs covering all segments contain globalSegmentStats, while logs about singular segments start with segmentStats.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-24823.

    • Bump javax.el to address CVE-2021-28170.

  • Other

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Fixed a bug where multiline comments weren't always highlighted correctly.

Humio Server 1.42.2 LTS (2022-10-05)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.42.2LTS2022-10-05

Cloud

2023-06-30No1.30.0No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.2/server-1.42.2.tar.gz

These notes include entries from the following previous releases: 1.42.0, 1.42.1

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The Feature Flag, CookieAuthServerSide, has been deprecated as cookie authentication is now enabled by default. Instead, the configuration field ENABLE_BEARER_TOKEN_AUTHORIZATION has been introduced.

  • The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.

    The DELETE_BACKUP_AFTER_MILLIS configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.

New features and improvements

  • Falcon Data Replicator

    • Added the fdr-message-count metric, which contains the approximate number of messages on an FDR feed's SQS queue.

    • Added the fdr-invisible-message-count metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.

    • Improved error logging, when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.

  • UI Changes

    • The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.

    • The Save As... button is now always displayed on the Search page.

    • Both the Scatter Chart and the Bar Chart widgets now support automatically adding/toggling axis and legend titles based on the mapped data.

    • The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.

  • Configuration

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

  • Dashboards and Widgets

    • The Single Value widget is now available. Construct a query which returns any single value, or use the timeChart() query function to create a single-value widget instance with sparkline and trend indicators.

    • The Gauge widget is being deprecated in favour of the Single Value widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.
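A minimal query sketch that could back a Single Value widget with a sparkline and trend indicators, per the description above:

```
timeChart(function=count())
```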

  • Log Collector

    • The Humio Log Collector can now be downloaded from the Organizational Settings page, see the Log Collector Documentation for a complete list of the supported logs formats and operating systems.

  • Functions

    • ioc:lookup() would sometimes give incorrect results when negated.

    • worldMap() accepts more magnitude functions, anonymous functions and the percentile() function.

    • worldMap() will warn about licensing issues with IP database.

    • sankey() now accepts more weight functions such as anonymous functions and the percentile() function.
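Sketches of the newly accepted magnitude and weight functions; the field names and percentile() parameters here are assumptions for illustration:

```
// worldMap with a percentile-based magnitude (field names hypothetical)
worldMap(ip=src_ip, magnitude=percentile(field=latency, percentiles=[95]))

// sankey with a percentile-based weight (field names hypothetical)
sankey(source=src_host, target=dst_host, weight=percentile(field=bytes, percentiles=[95]))
```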

  • Other

    • Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.

    • Fixed an issue where, if a download of IOCs took more than an hour, Humio would indefinitely start a new download every hour, each of which would eventually fail.

    • Fixed an underlying bug causing the message addToExistingJob did not find the existing job to be error-logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made right as the query is cancelled, Humio could log that message. With this fix, Humio will instead skip downloading the segment, and not log the error.

    • Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.

    • Email actions can now add the result set as a CSV attachment.

    • When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.

    • Logging to the humio-activity repository is now also done for events in sandbox repositories.

    • Specifying a versionless packageId will load the newest version of that package.

    • Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.

Fixed in this release

  • Security

    • Update Scala to address CVE-2022-36944.

  • Other

    • Prometheus metrics are now computed by only a single thread at a time. If more requests arrive concurrently, the next request receives the previous response.

    • Fix performance issue for users with access to many views.

    • Updated dependencies to woodstox to fix a vulnerability.

Humio Server 1.42.1 LTS (2022-07-18)

Version: 1.42.1
Type: LTS
Release Date: 2022-07-18
Availability: Cloud
End of Support: 2023-06-30
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.1/server-1.42.1.tar.gz

These notes include entries from the following previous releases: 1.42.0

Bug fixes and an updated dependency.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The CookieAuthServerSide feature flag has been deprecated, as cookie authentication is now enabled by default. Instead, the configuration field ENABLE_BEARER_TOKEN_AUTHORIZATION has been introduced.

  • The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.

    The DELETE_BACKUP_AFTER_MILLIS configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.
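As a hedged sketch of the migration path above, an on-prem S3-compatible setup might use environment settings along these lines (variable names and values are illustrative and should be checked against the bucket storage documentation):

```ini
# Illustrative bucket storage settings for an S3-compatible endpoint
# such as MinIO; names and values are assumptions, not verified config.
S3_STORAGE_BUCKET=humio-storage
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ENDPOINT_BASE=https://minio.example.internal:9000
S3_STORAGE_ACCESSKEY=<access key>
S3_STORAGE_SECRETKEY=<secret key>
# Retained setting: delay between deletion in Humio and removal from the bucket.
DELETE_BACKUP_AFTER_MILLIS=604800000
```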

New features and improvements

  • Falcon Data Replicator

    • Added the fdr-message-count metric, which contains the approximate number of messages on an FDR feed's SQS queue.

    • Added the fdr-invisible-message-count metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.

    • Improved error logging, when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.

  • UI Changes

    • The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.

    • The Save As... button is now always displayed on the Search page.

    • Both the Scatter Chart and the Bar Chart widgets now support automatically adding/toggling axis and legend titles based on the mapped data.

    • The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.

  • Configuration

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

  • Dashboards and Widgets

    • The Single Value widget is now available. Construct a query which returns any single value, or use the timeChart() query function to create a single-value widget instance with sparkline and trend indicators.

    • The Gauge widget is being deprecated in favour of the Single Value widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.

  • Log Collector

    • The Humio Log Collector can now be downloaded from the Organizational Settings page; see the Log Collector Documentation for a complete list of the supported log formats and operating systems.

  • Functions

    • ioc:lookup() would sometimes give incorrect results when negated.

    • worldMap() accepts more magnitude functions, anonymous functions and the percentile() function.

    • worldMap() will warn about licensing issues with IP database.

    • sankey() now accepts more weight functions such as anonymous functions and the percentile() function.

  • Other

    • Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.

    • Fixed an issue where, if a download of IOCs took more than an hour, Humio would indefinitely start a new download every hour, which would eventually fail.

    • Fixed an underlying bug that caused the error addToExistingJob did not find the existing job to be logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made just as the query is cancelled, Humio could log the message above. With this fix, Humio instead skips downloading the segment and does not log the error.

    • Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.

    • Email actions can now add the result set as a CSV attachment.

    • When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.

    • Logging to the humio-activity repository is now also done for events in sandbox repositories.

    • Specifying a versionless packageId will load the newest version of that package.

    • Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.

Fixed in this release

  • Other

    • The next set of Prometheus metrics is now computed in only a single thread at a time. If more requests arrive while a computation is in progress, they are served the previously computed response.

    • Updated the woodstox dependency to fix a vulnerability.

Humio Server 1.42.0 LTS (2022-06-17)

Version: 1.42.0
Type: LTS
Release Date: 2022-06-17
Availability: Cloud
End of Support: 2023-06-30
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: Yes

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.42.0/server-1.42.0.tar.gz

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The CookieAuthServerSide feature flag has been deprecated, as cookie authentication is now enabled by default. Instead, the configuration field ENABLE_BEARER_TOKEN_AUTHORIZATION has been introduced.

  • The local disk based backup feature described at Making Back-Ups is deprecated, and is planned for removal in September 2022. We have found that restoring backups using this feature is difficult in practice, it is not commonly used, and the backup/restore functionality is covered by the bucket storage feature as well. For these reasons, we are deprecating this feature in favour of bucket storage.

    The DELETE_BACKUP_AFTER_MILLIS configuration parameter, which controls the delay between data being deleted in Humio and removed from backup, will be retained, since it controls a similar delay for bucket storage. Customers using local disk based backups should migrate to using bucket storage instead. Systems not wishing to use a cloud bucket storage solution can keep backup support by instead installing an on-prem S3- or GCS-compatible solution, such as MinIO.

New features and improvements

  • Falcon Data Replicator

    • Added the fdr-message-count metric, which contains the approximate number of messages on an FDR feed's SQS queue.

    • Added the fdr-invisible-message-count metric, which contains the approximate number of invisible messages on an FDR feed's SQS queue.

    • Improved error logging, when an FDR feed fails to download data from an S3 bucket. It now clearly states when a download failed because the S3 bucket is located in a different region than the SQS queue.

  • UI Changes

    • The Format Panel is now available for changing the style of the data displayed in the Event list — see Changing the Data Display.

    • Both the Scatter Chart and the Bar Chart widgets now support automatically adding/toggling axis and legend titles based on the mapped data.

    • The Fields Panel now enables you to fetch fields beyond those from the last 200 events — see Adding and Removing Fields.

  • Configuration

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

  • Dashboards and Widgets

    • The Single Value widget is now available. Construct a query which returns any single value, or use the timeChart() query function to create a single-value widget instance with sparkline and trend indicators.

    • The Gauge widget is being deprecated in favour of the Single Value widget. Configurations of the former widget are compatible with the latter. This means that persisted configurations of the Gauge widget (url / dashboard widgets / saved queries / recent queries) are still valid, but are visualized using the Single Value widget instead.

  • Log Collector

    • The Humio Log Collector can now be downloaded from the Organizational Settings page; see the Log Collector Documentation for a complete list of the supported log formats and operating systems.

  • Functions

    • ioc:lookup() would sometimes give incorrect results when negated.

    • worldMap() accepts more magnitude functions, anonymous functions and the percentile() function.

    • worldMap() will warn about licensing issues with IP database.

    • sankey() now accepts more weight functions such as anonymous functions and the percentile() function.

  • Other

    • Fixed an issue where Humio's ZooKeeper monitoring page would show X/0 followers in sync.

    • Fixed an issue where, if a download of IOCs took more than an hour, Humio would indefinitely start a new download every hour, which would eventually fail.

    • Fixed an underlying bug that caused the error addToExistingJob did not find the existing job to be logged unnecessarily. Humio may decide to fetch a segment from bucket storage for querying. If this decision is made just as the query is cancelled, Humio could log the message above. With this fix, Humio instead skips downloading the segment and does not log the error.

    • Ensured that errors during view tombstone removal are logged and don't prevent the RetentionJob from performing other cleanup tasks.

    • Email actions can now add the result set as a CSV attachment.

    • When cleaning up a deleted data space, don't error log if two nodes race to delete the data space metadata from global.

    • Logging to the humio-activity repository is now also done for events in sandbox repositories.

    • Specifying a versionless packageId will load the newest version of that package.

    • Fixed an issue where a scheduled search could trigger actions multiple times for the same time period if actions took a long time to finish.

Humio Server 1.40.0 LTS (2022-05-12)

Version: 1.40.0
Type: LTS
Release Date: 2022-05-12
Availability: Cloud
End of Support: 2023-05-31
Security Updates: No
Upgrades From: 1.30.0
Config. Changes: Yes

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.40.0/server-1.40.0.tar.gz

1.40 REQUIRES minimum version 1.30.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.30.0+ first. After running 1.40.0 or later, you cannot run versions prior to 1.30.0.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Configuration

    • The selfJoin() query function was observed to cause memory problems, so we have introduced a limit on the number of output events (there was previously no bound). This limit can be adjusted with the GraphQL mutation setDynamicConfig and the configuration flag SelfJoinLimit. A value of -1 returns selfJoin() to its old, unbounded behavior.
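For illustration, a setDynamicConfig call could be issued against the GraphQL API roughly as follows. The exact input field names used here are assumptions, so check them against your cluster's GraphQL schema before use:

```python
import json

def set_dynamic_config_payload(flag, value):
    """Build the HTTP request body for the setDynamicConfig mutation.
    The input field names (config/value) are assumed, not verified schema."""
    mutation = (
        "mutation SetConfig($config: DynamicConfig!, $value: String!) {"
        "  setDynamicConfig(input: {config: $config, value: $value})"
        "}"
    )
    return json.dumps({
        "query": mutation,
        "variables": {"config": flag, "value": value},
    })

# e.g. restore the old, unbounded selfJoin() behavior:
body = set_dynamic_config_payload("SelfJoinLimit", "-1")
```

The resulting JSON string would be POSTed to the cluster's GraphQL endpoint with an appropriate API token.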

New features and improvements

  • Falcon Data Replicator

    • The static configuration variable ENABLE_FDR_POLLING_ON_NODE is no longer supported, as its functionality has been replaced with the dynamic configurations listed below.

    • Introduced dynamic configuration options for changing FDR polling behaviour at runtime. FDR polling is not enabled by default, so you should take care to set up these new configurations after upgrading, or you will risk that your FDR data isn't ingested into Humio before it is deleted from Falcon.

    • Using the dynamic configuration option FdrEnable, administrators can now turn FDR polling on/off on the entire cluster with a single update. Defaults to false.

    • Using the dynamic configuration option FdrMaxNodes, administrators can put a cap on how many nodes should at most simultaneously poll data from the same FDR feed. Defaults to 5 nodes.

    • Using the dynamic configuration option FdrExcludedNodes, administrators can now exclude specific nodes from polling from FDR. Defaults to the empty list, so all nodes will be used for polling.

    • It is now possible to test an FDR feed in the UI, which will test that Humio can connect to the SQS queue and the S3 bucket.

    • Fixed an issue where exceptions in FDR were not properly logged.

  • UI Changes

    • Introducing the new Scatter Chart widget (previously known as XY):

      • It supports long data format (one field for the series name and one field for the y values) as well as wide format (one field per series value).

      • You can now visualize data in the Scatter Chart when queried with the timeChart(), bucket() and groupBy() functions, as well as the table() function like before.

    • Added style options to either truncate or show full legend labels in widgets.

    • Improvements to the Sankey Diagram widget: it now has multiple style options, including show/hide the y-axis, sorting type, label position, and colors plus labels for series.

    • Added support in fieldstats() query function for skipping events. This is used by the UI, but only in situations where we know an approximate result is acceptable and where processing all events would be too costly.

    • Improvements to the Pie Chart widget: it now has a max series setting similar to the Time Chart widget.

    • Syntax highlighting for XML, JSON and accesslog data now uses more distinguishable colors.

    • The @timestamp column can now be moved among the other columns in the event list.

    • When using a widget that is not compatible with the current data, the Reset Widget Type button now works again.

    • The widget dropdown can now be navigated with the keyboard.

    • Events with JSON data can now be collapsed and expanded in the JSON panel.

    • Keep empty lines in queries when exporting assets as templates or to packages.

  • GraphQL API

    • Added two new organization level permissions: DeleteAllRepositories and DeleteAllViews that allow repository and view deletion, respectively, inside an organization.

    • The GraphQL queries and mutations for FDR feeds are no longer in preview.

    • Removed the following deprecated GraphQL fields: UserSettings.settings, UserSettings.isEventListOrderChangedMessageDismissed, and UserSettings.isNewRepoHelpDismissed.

    • Changed permission token related GraphQL endpoints to use enumerations instead of strings.

    • It is now possible to refer to a parser by name when creating or updating an ingest listener using the GraphQL API mutations createIngestListenerV3 and updateIngestListenerV3. It is now also possible to change the repository on an ingest listener using updateIngestListenerV3. The old mutations createIngestListenerV2 and updateIngestListenerV2 have been deprecated.

    • Removed the deprecated clientMutationId argument from the GraphQL mutation updateSettings.

    • Marked experimental language features as preview in GraphQL API.

    • Added a GraphQL mutation deleteSearchDomainById that deletes views or repositories by ID.

    • It is now possible to refer to a parser by name when creating an ingest token or assigning a parser to an existing ingest token using the GraphQL API mutations addIngestTokenV3 and assignParserToIngestTokenV2. The old mutations addIngestTokenV2 and assignParserToIngestToken have been deprecated.

    • Added a new GraphQL mutation to rename views or repositories by ID.
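As a sketch of the parser-by-name flow mentioned above, a request body for addIngestTokenV3 might be built like this. The input field names are assumptions, so verify them against the GraphQL schema of your cluster:

```python
import json

def add_ingest_token_payload(repository, token_name, parser_name):
    """Build a request body for the addIngestTokenV3 mutation,
    referring to the parser by name (field names are assumed)."""
    mutation = (
        "mutation AddToken($repo: String!, $name: String!, $parser: String) {"
        "  addIngestTokenV3(input: {repositoryName: $repo, name: $name, parser: $parser}) {"
        "    name"
        "  }"
        "}"
    )
    return json.dumps({
        "query": mutation,
        "variables": {"repo": repository, "name": token_name, "parser": parser_name},
    })

# Hypothetical repository, token, and parser names for illustration:
body = add_ingest_token_payload("my-repo", "collector-token", "json")
```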

  • Configuration

    • Added a new config NATIVE_FADVICE_SUPPORT (default true) to allow turning off the use of fadvice internally.

    • Amended how Humio chooses segments to download from bucket storage when prefetching. If S3_STORAGE_PREFERRED_COPY_SOURCE is false, the prefetcher will only download segments that are not already on another host. Otherwise, it will download to as many hosts as necessary to meet the configured replication factor. This should help avoid excessive bucket downloads when nodes in the cluster have lots of empty disk space.

    • Validate block CRCs before uploading segment files to bucket storage. Can be disabled by setting VALIDATE_BLOCK_CRCS_BEFORE_UPLOAD to false.

    • Added a new config NATIVE_FALLOCATE_SUPPORT (default true) to allow turning off the use of fallocate and ftruncate internally.

    • It is now required that the {S3/GCS}_STORAGE config is set before {S3/GCS}_STORAGE_2 is set.

    • Added a new configuration variable BUCKET_STORAGE_TRUST_POLICY for the dual-bucket use case. This setting configures which bucket is considered the "trusted" bucket when two buckets are configured, which impacts when Humio considers data to be safely replicated. Supported values are Primary for trusting the primary bucket, Secondary for trusting the secondary bucket, TrustEither for considering data safely replicated if it is in either bucket, and RequireBoth for considering data safely replicated only if it is in both buckets. This config replaces the BUCKET_STORAGE_2_TRUSTED configuration, true in the old configuration equates to Secondary in the new configuration. The default value of the new configuration is Secondary.
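A small sketch of the documented mapping from the old BUCKET_STORAGE_2_TRUSTED setting to the new BUCKET_STORAGE_TRUST_POLICY values. Note that the false case below is an assumption; the note only states that true maps to Secondary:

```python
VALID_POLICIES = {"Primary", "Secondary", "TrustEither", "RequireBoth"}

def migrate_trust_policy(legacy_trusted):
    """Map legacy BUCKET_STORAGE_2_TRUSTED to BUCKET_STORAGE_TRUST_POLICY.

    Documented: true -> Secondary; unset -> the new default, Secondary.
    Assumed:    false -> Primary (trust the primary bucket).
    """
    if legacy_trusted is None:
        return "Secondary"   # documented default of the new configuration
    return "Secondary" if legacy_trusted else "Primary"
```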

  • Dashboards and Widgets

    • Improvements to the Time Chart widget:

      • It now has an option to show the underlying data points, which makes it possible to inspect the behaviour of the different interpolation methods.

      • Trend lines can now be added in the chart.

    • Introducing the Single Value widget. Construct a query which returns any single value, or use the timeChart() query function to create a single-value widget instance with sparkline and trend indicators.

    • Improvements to the Bar Chart widget:

      • Added style options to name the x and y axis.

      • Added option for interpreting the resulting query data as either wide or long format data.

      • Added option to set a max label length for the x-axis, instead of the bottom padding option. With auto-padding and this style option, it is easier to fit the wanted information in the view.

      • It is now possible to configure bar charts to have a logarithmic y axis.

      • Introduced the stacked bar charts option.

      • It no longer has an artificial minimum height for bars, as this may distort at-a-glance interpretations of the chart.

      • It no longer has sorting by default, which means that the order will be identical to the query result. You can now sort the x axis of the bar chart by using the sort() query function, if sort by series in the style options is not set.

      • It now has a max series setting similar to the Time Chart widget.

  • Functions

    • The findTimestamp() function now supports date formats like 23FEB2022, that is, day, literal month, and year without any separators in between. Other formats still require separators between the parts.

  • Other

    • Fixed an ingest bug where, under some circumstances, we would reverse the order of events in a batch.

    • Fixed bugs related to repository deletes.

    • It is now possible to create a view with the same name as a deleted view.

    • Fixed an ingest bug where if multiple types of errors occurred for an event we would only add error fields describing one of them. Now we always report all errors.

    • Added a new system-level permission allowing changing the user name of a user.

    • Fixed an issue where OrganizationStatsUpdaterJob would repeatedly post the error com.humio.entities.organization.OrganizationSingleModeNotSupported: Not supported when using organizations in single mode when the cluster was configured for only one organization.

    • Fixed an issue where query cancellation could in rare cases cause the query scheduler to throw exceptions.

    • Fixed how relative time is displayed.

    • Ingest listeners are now only stopped, not deleted, when a user deletes a repository. If the repository is restored, the ingest listener will be restarted automatically. When it is no longer possible to restore the repository, the ingest listener will be deleted.

    • Added support for restoring deleted repositories and views when using bucket storage. See Delete a Repository or View.

    • Humio is now more strict during a Kafka reset to avoid global desyncs. Only one node will be allowed to boot on the new epoch, remaining nodes won't be allowed to use their snapshots, and will need to fetch a fresh global snapshot from that node.

    • If the query scheduler attempts to read a broken segment file, it may be able to fetch a new copy from bucket storage in some cases. Humio will now only allow this if it can be guaranteed that no events from the broken segment have been added to the query result. Otherwise the query will receive a warning.

    • Fixed an ingest bug where we might discard @timezone and @error fields in events with too many fields. Now we always retain those and only discard other fields.

    • Fixed a bug with UTF-8 serialization of 4-byte codepoints (emojis etc.).

    • When Humio detects multiple datasources for the same set of tags, it will now deduplicate them by selecting one source to keep and marking the others replaced.

    • Added humio-token-hashing.sh to the Humio bin directory. This invokes a utility for generating root tokens.

    • Added more visibility on organization limits when changing the retention settings on a repository.

    • Fixed an issue that links in alerts from OpsGenie actions were not clickable.

    • Added humio-decrypt-bucket-file.sh to the Humio bin directory. This invokes a utility for decrypting files downloaded from bucket storage.

    • Fixed an ingest bug where sometimes we wouldn't turn event fields into tags if we fell back to using the key-value parser. Now we always turn fields into tags.

    • It is no longer possible to create ingest listeners on system repositories using the APIs. Previously, it was only prohibited in the UI.

    • Fixed a caching-related issue with groupBy() in live queries that would briefly cause inconsistent results.

    • Webhook action now includes the 'Message Body Template' for PATCH and DELETE requests as well if it is not empty.

    • Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race, an in-memory cache of the file header might hold contents that did not match the local file, resulting in Broken segment warnings in queries.

    • Added a feature that allows deletion of repositories and views on cloud.

    • When calculating the starting offset in Kafka for digest, Humio will now trust that if a segment in global is listed as being in bucket storage, that segment is actually present in bucket storage. Humio no longer double checks by asking bucket storage directly.

    • Fixed an issue where download of IOCs from another node in the cluster could start before the previous download had finished, resulting in too many open connections between nodes in the cluster.

    • Fixed an issue where Filebeat 8.1 would not be compatible unless output.elasticsearch.allow_older_versions was set to true.

    • Renamed the Humio tarball distribution to humio-1.39.0.tar.gz instead of humio-release-1.39.0.tar.gz. The file now contains a directory named humio-1.39.0 instead of humio-release-1.39.0.

    • Updating alert labels using the addAlertLabel and removeAlertLabel mutations now requires the ChangeTriggersAndActions permission.

    • Fixed an issue where the UI would not detect parameters in a query when using saved queries from a package.

    • Made changes to Humio's tracking of bucket storage downloads. This should avoid some rare cases where downloads could get stuck.

    • Reduced the amount of time Humio will spend during shutdown waiting for in-progress data to flush to disk to 60 seconds from 150 seconds.

    • Fixed an issue that could cause creation of two datasources for the same tag set if messages with the same tags happened to arrive on different Kafka partitions.

    • During ingest, if an event has too many fields we now sort the fields lexicographically and remove fields from the end. Before, there was no system governing which fields were retained; it was effectively random.

    • Adding and removing queries from the query blocklist is now audit logged as two separate audit log event types, query-blocklist-add and query-blocklist-remove, rather than the single event type blocklist.

    • Improved the phrasing of some error messages.

    • Fixed a bug where accessing a CSV file with records spanning multiple lines would fail with an exception.

    • The REST API for ingest listeners has been deprecated.

    • Improved distribution of new autosharded datasources.

    • Fixed an issue where an exception in rare cases could cause ingest requests to fail intermittently.

    • Fixed an issue where the query scheduler improperly handled regex limits being hit: hitting a limit should result in a warning on the query, but in some cases it was handled by retrying the segment read.

    • Fixed an issue where the set-replication-defaults config endpoint could attempt to assign storage to nodes configured not to store segments.

    • Fixed an issue where some errors showed wrong positions in the search page query field.

    • It is no longer possible to delete a parser that is used by an ingest listener. You must first assign another parser to the ingest listener.

    • Fixed an issue where audit logging of alerts, scheduled searches and actions residing on views would yield incomplete or missing audit logs.

    • Fixed an issue where NetFlow parsing would crash if it received an options data record.

    • It is now validated that the parser supplied when creating or updating an ingest listener exists.

    • Fixed an ingest bug where, when truncating an event with too many fields, we wouldn't count error fields, leading to the event still being larger than the maximum size.

    • Fixed an issue where Filebeat 8.0 would not be compatible unless setup.ilm.enabled was set to false.

    • Create, update and delete operations on ingest listeners are now always audit logged. Previously, they were only logged when performed through the REST API. Also, the audit log format has been updated to be similar to the format of other assets. Look for events with the type field set to ingestlistener.create, ingestlistener.update, and ingestlistener.delete.

    • Fixed an issue when using bucket storage alongside secondary storage, where Humio would download files to the secondary storage but register them as present in the primary. It will now download and register them as present on the secondary storage.

    • Fixed duplicate Change triggers and actions entry in view permission token page.

    • Fixed an issue that could cause an exception to be thrown in the ingest code if digest assignment changed while a local segment file being written was still empty.

    • Improved performance of formatting action messages, when the query result for an alert or scheduled search contains large events.

    • Improved distribution onto partitions of tag combinations (datasources) that are affected by auto sharding, resulting in less collisions.

    • Improved the flow of creating a blocked query.

    • Humio will now periodically log node configs to the debug log, in addition to the existing log of config on node boot. These logs will come from com.humio.jobs.ConfigLoggerJob.

    • When shared dashboards are disabled or become inaccessible because of IP filters, they will now be completely unreachable, and any dashboards already open will show an informative error message.

    • It is no longer possible to use experimental functions in Alerts, Parsers, and Event Forwarding. They are now only available on the search page.

    • Webhook action has been updated to only allow the following HTTP verbs: GET, HEAD, POST, PUT, PATCH, DELETE and OPTIONS.

    • Added a feature that allows regular users with delete permissions on cloud to rename views and repositories.

    • Fixed an issue where non-default log formats such as log4j2-json-stdout.xml that logs to STDOUT were not fully in control of their output stream, as log entries of level ERROR were also printed directly to stderr from within the code. The default log4j2 configuration now includes a Console appender that prints errors to stdout, achieving the same result, while allowing the other formats to fully control their output stream.

    • Fixed an issue that could cause the query scheduler to erroneously retry searching a bucketed segment.

    • When logging Kafka consumer and producer metrics, Humio will now log repeated metrics like records-lag-max once per partition, with the partition specified in the partition field.

    • Automatic system removals of queries expired from the blocklist are now audit logged as well.

Humio Server 1.38.2 LTS (2022-06-13)

Version: 1.38.2
Type: LTS
Release Date: 2022-06-13
Availability: Cloud
End of Support: 2023-03-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.2/server-1.38.2.tar.gz

These notes include entries from the following previous releases: 1.38.0, 1.38.1

Updated dependencies to address security vulnerabilities and weaknesses.

New features and improvements

  • Falcon Data Replicator

    • Improved performance of FDRJob.

  • UI Changes

    • Minor UX improvements (i.e. accessibility) on the queries panel.

    • On the time, bar and pie charts you can hold the ALT/OPTION key to display long legend titles.

    • When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.

    • Added a quick-fix for unknown escape sequences in the search field.

    • When using the table visualisation in dark mode, empty table cells are now clearly discernible.

    • The first row entry in the statistics table on the repo page is now a table header, and hidden content has been added to the empty table header on the new view page.

    • Added a warning for unknown escape sequences in the search field.

    • Hover information in the search field is now shown even when a warning overlaps it.

    • Reworked the hover message layout and changed the hover information on text (in the search field).

    • Better accessibility for queries panel. You can now tab to focus individual queries, and open a details panel. From here you can also access all actions in the details panel by tabbing.

    • Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.

    • Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.

    • Hover over parameter names and arguments in the search field now includes the default value.

    • The Cluster Nodes table has been redesigned to allow for an easier overview and copying of the version number.

    • Fixed an issue where queries with tail() would behave in an unexpected manner when an event is focused.

    • The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.

    • Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.

    • The search page now has focus states on the Language Syntax, Event List Widget and Save As buttons.

    • Pop-ups and drop-downs will now close automatically when focus leaves them.

  • GraphQL API

    • The PERMISSION_MODEL_MODE config option has been removed. All related GraphQL schema has also been removed.

    • Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Missing form fields are now also properly reported in the response.

    • Deprecated the ReadContents view action in favor of ReadEvents. ReadEvents has accordingly been undeprecated, as we have slightly changed how read rights are considered and want the action names to match.

  • Configuration

    • The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0.

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL (the ingest queue compression level) from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.

    • Added new configuration NATIVE_FALLOCATE_SUPPORT (default true) to allow turning off the use of fallocate and ftruncate internally.

    • Added config RDNS_DEFAULT_SERVER for specifying the default DNS server for the rdns() query function.

    • Added new settings for how uploads to bucket storage are validated. In cases where validation with ETags is not available, content length can be used instead.

    • When Kafka topic configuration is managed by Humio (the default), max.message.bytes on the topics is set to the value of the TOPIC_MAX_MESSAGE_BYTES configuration; the default is 8388608 (8 MB) and the minimum is 2 MB.

    • Added new configuration NATIVE_FADVICE_SUPPORT (default true) to allow turning off the use of fadvice internally.

    • Added config IP_FILTER_RDNS for specifying what IP addresses can be queried using the rdns() query function.

    • Added config IP_FILTER_RDNS_SERVER for specifying what DNS servers are allowed in the rdns() query function.

    • Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; by default all origins are allowed.

    • Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.
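
The ingest-queue and topic-size options above take plain numeric values; as an illustrative sketch (the values shown are examples, not recommendations), they would be set as environment variables before starting the server:

```shell
# Example values only. Cap the Kafka topic message size at the 8 MB default
# (TOPIC_MAX_MESSAGE_BYTES must be at least 2 MB, i.e. 2097152):
export TOPIC_MAX_MESSAGE_BYTES=8388608
# Opt back into the previous ingest-queue compression level (the new default is 0):
export INGESTQUEUE_COMPRESSION_LEVEL=1
```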

  • Functions

    • Fixed an issue where tail() could produce results inconsistent with other query functions, when used in a live query.

  • Other

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.

    • Added a new system-level permission that allows changing usernames of users.

    • During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.

    • Improved the performance of deletes from global.

    • Do not run the Global snapshot consistency check on stateless ingest nodes.

    • Fixed an issue where users could be shown in-development features on the client when running a local installation of Humio.

    • Fixed a bug in the Sankey chart such that it now updates on updated query results.

    • Added tombstoning to uploaded files, which helps with avoiding data loss.

    • Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.

    • Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.

    • Fixed a bug where the Add Column button on the Fields panel would do nothing.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.

    • Fixed a compatibility issue with FileBeat 8.0.0.

    • Fixed several issues where users could add invalid query filters via the Add filter context button after selecting text in the Event List.

    • Fixed an ingest bug where under some circumstances we would reverse the order of events in a batch.

    • During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.

    • Fixed an issue where negated functions could lose their negation.

    • Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.

    • Previously a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected.

    • The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different number via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.

    • LSP warnings no longer crash queries.

    • Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
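
Because the Kafka 3.1.0 client enables the idempotent producer (which implies acks=all), clusters that deliberately run with weaker acknowledgements need a matching override. A sketch, with a hypothetical file path:

```shell
# Hypothetical location; point EXTRA_KAFKA_CONFIGS_FILE at your own overrides file.
cat > /etc/humio/extra-kafka.properties <<'EOF'
# Running with acks=1 requires explicitly disabling idempotence on the 3.1.0 client.
acks=1
enable.idempotence=false
EOF
export EXTRA_KAFKA_CONFIGS_FILE=/etc/humio/extra-kafka.properties
```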

Fixed in this release

  • Security

    • Updated dependencies to fix CVE-2021-22573.

  • Summary

    • Updated JavaScript dependencies to fix vulnerabilities.

    • Updated Jackson dependencies to fix a vulnerability.

  • Other

    • Use the latest version of Java 13 in the Docker image.

    • Use the latest version of Alpine in the Docker image.

Humio Server 1.38.1 LTS (2022-04-27)

Version: 1.38.1
Type: LTS
Release Date: 2022-04-27
Availability: Cloud
End of Support: 2023-03-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.1/server-1.38.1.tar.gz

These notes include entries from the following previous releases: 1.38.0

Updated dependencies with security fixes.

New features and improvements

  • Falcon Data Replicator

    • Improved performance of FDRJob.

  • UI Changes

    • Minor UX improvements (e.g. accessibility) on the queries panel.

    • On the time, bar and pie charts you can hold the ALT/OPTION key to display long legend titles.

    • When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.

    • Added a quick-fix for unknown escape sequences in the search field.

    • When using the table visualisation in dark mode, empty table cells are now clearly discernible.

    • The first row of the statistics table on the repo page is now a table header, and hidden content has been added to the empty table header on the new view page.

    • Added a warning for unknown escape sequences in the search field.

    • Hover information in the search field is now shown even when a warning overlaps it.

    • Reworked the hover message layout and changed the hover information on text (in the search field).

    • Better accessibility for the queries panel. You can now tab to focus individual queries and open a details panel, from which all actions can also be reached by tabbing.

    • Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.

    • Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.

    • Hover over parameter names and arguments in the search field now includes the default value.

    • The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.

    • Fixed an issue where queries with tail() would behave in an unexpected manner when an event is focused.

    • The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.

    • Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.

    • The search page now has focus states on the Language Syntax, Event List Widget and Save As buttons.

    • Pop-ups and drop-downs will now close automatically when focus leaves them.

  • GraphQL API

    • The PERMISSION_MODEL_MODE config option has been removed. All related GraphQL schema has also been removed.

    • Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Missing form fields are now also properly reported in the response.

    • Deprecated the ReadContents view action in favor of ReadEvents. ReadEvents has accordingly been undeprecated, as we have slightly changed how read rights are considered and want the action names to match.

  • Configuration

    • The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0.

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL (the ingest queue compression level) from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.

    • Added new configuration NATIVE_FALLOCATE_SUPPORT (default true) to allow turning off the use of fallocate and ftruncate internally.

    • Added config RDNS_DEFAULT_SERVER for specifying the default DNS server for the rdns() query function.

    • Added new settings for how uploads to bucket storage are validated. In cases where validation with ETags is not available, content length can be used instead.

    • When Kafka topic configuration is managed by Humio (the default), max.message.bytes on the topics is set to the value of the TOPIC_MAX_MESSAGE_BYTES configuration; the default is 8388608 (8 MB) and the minimum is 2 MB.

    • Added new configuration NATIVE_FADVICE_SUPPORT (default true) to allow turning off the use of fadvice internally.

    • Added config IP_FILTER_RDNS for specifying what IP addresses can be queried using the rdns() query function.

    • Added config IP_FILTER_RDNS_SERVER for specifying what DNS servers are allowed in the rdns() query function.

    • Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; by default all origins are allowed.

    • Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.
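
For reference, the broker protocol default described above can be pinned either way; a sketch in which the file path and container image name are illustrative, not prescribed:

```shell
# Option 1: set the property directly in kafka.properties (path illustrative):
echo 'inter.broker.protocol.version=2.4' >> /etc/kafka/kafka.properties

# Option 2: pass the default to the Docker container at launch (image name illustrative):
docker run -e DEFAULT_INTER_BROKER_PROTOCOL_VERSION=2.4 humio/kafka
```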

  • Functions

    • Fixed an issue where tail() could produce results inconsistent with other query functions, when used in a live query.

  • Other

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.

    • Added a new system-level permission that allows changing usernames of users.

    • During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.

    • Improved the performance of deletes from global.

    • Do not run the Global snapshot consistency check on stateless ingest nodes.

    • Fixed an issue where users could be shown in-development features on the client when running a local installation of Humio.

    • Fixed a bug in the Sankey chart such that it now updates on updated query results.

    • Added tombstoning to uploaded files, which helps with avoiding data loss.

    • Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.

    • Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.

    • Fixed a bug where the Add Column button on the Fields panel would do nothing.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.

    • Fixed a compatibility issue with FileBeat 8.0.0.

    • Fixed several issues where users could add invalid query filters via the Add filter context button after selecting text in the Event List.

    • Fixed an ingest bug where under some circumstances we would reverse the order of events in a batch.

    • During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.

    • Fixed an issue where negated functions could lose their negation.

    • Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.

    • Previously a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected.

    • The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different number via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.

    • LSP warnings no longer crash queries.

    • Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.
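
The HEC change above means the sourcetype input field is carried through to @sourcetype; a hedged sketch of such a payload, where the host, endpoint path, and token are hypothetical:

```shell
# Hypothetical endpoint and token; the payload follows the HEC JSON shape.
PAYLOAD='{"event":"user logged in","sourcetype":"auth_log"}'
# curl -s "https://logscale.example.com/api/v1/ingest/hec" \
#   -H "Authorization: Bearer $INGEST_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
# After ingest, the event would be searchable via the @sourcetype field.
echo "$PAYLOAD"
```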

Fixed in this release

  • Summary

    • Updated JavaScript dependencies to fix vulnerabilities.

    • Updated Jackson dependencies to fix a vulnerability.

  • Other

    • Use the latest version of Java 13 in the Docker image.

    • Use the latest version of Alpine in the Docker image.

Humio Server 1.38.0 LTS (2022-03-15)

Version: 1.38.0
Type: LTS
Release Date: 2022-03-15
Availability: Cloud
End of Support: 2023-03-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: Yes


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.38.0/server-1.38.0.tar.gz

Humio can now poll and ingest data from the Falcon platform's Falcon Data Replicator (FDR) service. This feature can be used as an alternative to the standalone fdr2humio project. See Ingesting FDR Data into a Repository for more information.

New features and improvements

  • Falcon Data Replicator

    • Improved performance of FDRJob.

  • UI Changes

    • Minor UX improvements (e.g. accessibility) on the queries panel.

    • On the time, bar and pie charts you can hold the ALT/OPTION key to display long legend titles.

    • When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.

    • Added a quick-fix for unknown escape sequences in the search field.

    • When using the table visualisation in dark mode, empty table cells are now clearly discernible.

    • The first row of the statistics table on the repo page is now a table header, and hidden content has been added to the empty table header on the new view page.

    • Added a warning for unknown escape sequences in the search field.

    • Hover information in the search field is now shown even when a warning overlaps it.

    • Reworked the hover message layout and changed the hover information on text (in the search field).

    • Better accessibility for the queries panel. You can now tab to focus individual queries and open a details panel, from which all actions can also be reached by tabbing.

    • Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.

    • Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.

    • Hover over parameter names and arguments in the search field now includes the default value.

    • The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.

    • Fixed an issue where queries with tail() would behave in an unexpected manner when an event is focused.

    • The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.

    • Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.

    • The search page now has focus states on the Language Syntax, Event List Widget and Save As buttons.

    • Pop-ups and drop-downs will now close automatically when focus leaves them.

  • GraphQL API

    • The PERMISSION_MODEL_MODE config option has been removed. All related GraphQL schema has also been removed.

    • Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Missing form fields are now also properly reported in the response.

    • Deprecated the ReadContents view action in favor of ReadEvents. ReadEvents has accordingly been undeprecated, as we have slightly changed how read rights are considered and want the action names to match.

  • Configuration

    • The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0.

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL (the ingest queue compression level) from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.

    • Added new configuration NATIVE_FALLOCATE_SUPPORT (default true) to allow turning off the use of fallocate and ftruncate internally.

    • Added config RDNS_DEFAULT_SERVER for specifying the default DNS server for the rdns() query function.

    • Added new settings for how uploads to bucket storage are validated. In cases where validation with ETags is not available, content length can be used instead.

    • When Kafka topic configuration is managed by Humio (the default), max.message.bytes on the topics is set to the value of the TOPIC_MAX_MESSAGE_BYTES configuration; the default is 8388608 (8 MB) and the minimum is 2 MB.

    • Added new configuration NATIVE_FADVICE_SUPPORT (default true) to allow turning off the use of fadvice internally.

    • Added config IP_FILTER_RDNS for specifying what IP addresses can be queried using the rdns() query function.

    • Added config IP_FILTER_RDNS_SERVER for specifying what DNS servers are allowed in the rdns() query function.

    • Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; by default all origins are allowed.

    • Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.
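
A sketch of two of the smaller knobs above, with example values (the hostnames and resolver address are hypothetical):

```shell
# Restrict CORS to a comma-separated allow-list (unset allows all origins):
export CORS_ALLOWED_ORIGINS="https://app.example.com,https://ops.example.com"
# Hypothetical internal resolver used as the default for the rdns() query function:
export RDNS_DEFAULT_SERVER="10.0.0.53"
```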

  • Functions

    • Fixed an issue where tail() could produce results inconsistent with other query functions, when used in a live query.

  • Other

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.

    • Added a new system-level permission that allows changing usernames of users.

    • During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.

    • Improved the performance of deletes from global.

    • Do not run the Global snapshot consistency check on stateless ingest nodes.

    • Fixed an issue where users could be shown in-development features on the client when running a local installation of Humio.

    • Fixed a bug in the Sankey chart such that it now updates on updated query results.

    • Added tombstoning to uploaded files, which helps with avoiding data loss.

    • Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.

    • Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.

    • Fixed a bug where the Add Column button on the Fields panel would do nothing.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.

    • Fixed a compatibility issue with FileBeat 8.0.0.

    • Fixed several issues where users could add invalid query filters via the Add filter context button after selecting text in the Event List.

    • Fixed an ingest bug where under some circumstances we would reverse the order of events in a batch.

    • During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.

    • Fixed an issue where negated functions could lose their negation.

    • Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.

    • Previously a package could be updated with another package with the same name and version, but with different content. This is no longer allowed, and any attempt to do so will be rejected.

    • The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acknowledgements to a different number via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.

    • LSP warnings no longer crash queries.

    • Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.

Humio Server 1.37.1 GA (2022-02-25)

Version: 1.37.1
Type: GA
Release Date: 2022-02-25
Availability: Cloud
End of Support: 2023-03-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No

Available for download two days after release.


Minor fixes and improvements.

New features and improvements

  • Falcon Data Replicator

    • Improved performance of FDRJob.

  • Other

    • Added a new system-level permission that allows changing usernames of users.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

Fixed in this release

  • Other

    • Fixed an issue where users could be shown in-development features on the client when running a local installation of Humio.

    • Fixed an issue where QueryFunctionValidator failed with a scala.MatchError.

    • Fixed an issue where some queries using regex would use an unbounded regex engine.

Humio Server 1.37.0 GA (2022-02-14)

Version: 1.37.0
Type: GA
Release Date: 2022-02-14
Availability: Cloud
End of Support: 2023-03-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: Yes

Available for download two days after release.


Humio can now poll and ingest data from the Falcon platform's Falcon Data Replicator (FDR) service. This feature can be used as an alternative to the standalone fdr2humio project. See Ingesting FDR Data into a Repository for more information.

New features and improvements

  • UI Changes

    • Reworked the hover message layout and changed the hover information on text (in the search field).

    • Hover over parameter names and arguments in the search field now includes the default value.

    • On the time, bar and pie charts you can hold the ALT/OPTION key to display long legend titles.

    • Added a quick-fix for unknown escape sequences in the search field.

    • The bar and pie charts now support holding the SHIFT key to display unformatted numeric values.

    • The first row of the statistics table on the repo page is now a table header, and hidden content has been added to the empty table header on the new view page.

    • The Cluster Nodes table has been redesigned to allow for easier overview and copying the version-number.

    • The search page now has focus states on the Language Syntax, Event List Widget and Save As buttons.

    • When using the table visualisation in dark mode, empty table cells are now clearly discernible.

    • Better accessibility for the queries panel. You can now tab to focus individual queries and open a details panel, from which all actions can also be reached by tabbing.

    • Visually hidden clipboard field is now hidden for assistive technologies/keyboard users.

    • Added a warning for unknown escape sequences in the search field.

    • Minor UX improvements (e.g. accessibility) on the queries panel.

    • Added a quick-fix to convert non-ASCII quotes to ASCII quotes in the search field.

    • Hover information in the search field is now shown even when a warning overlaps it.

    • Pop-ups and drop-downs will now close automatically when focus leaves them.

    • When changing focus inside a dialog with the keyboard, the focus will no longer move outside the dialog while it is open.

  • GraphQL API

    • Deprecated the ReadContents view action in favor of ReadEvents. ReadEvents has accordingly been undeprecated, as we have slightly changed how read rights are considered and want the action names to match.

    • Fixed a bug in the response from calling the installPackageFromZip GraphQL mutation. Previously, the response type exposed a deprecated clientmutationid that could not be selected. Missing form fields are now also properly reported in the response.

  • Configuration

    • Fixed a bug where TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE would only recognize lower-case values.

    • Added config RDNS_DEFAULT_SERVER for specifying what DNS server is the default for the rdns() query function.

    • Added config IP_FILTER_RDNS for specifying what IP addresses can be queried using the rdns() query function.

    • Added new settings for how uploads to bucket storage are validated. In cases where validation with ETags is not available, content length can be used instead.

    • Added config IP_FILTER_RDNS_SERVER for specifying what DNS servers are allowed in the rdns() query function.

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL (the ingest queue compression level) from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.

    • The PERMISSION_MODEL_MODE configuration option has been removed. All related GraphQL schema has also been removed.

    • The property inter.broker.protocol.version in kafka.properties now defaults to 2.4 if not specified. Users upgrading Kafka can either set inter.broker.protocol.version manually in kafka.properties, or pass DEFAULT_INTER_BROKER_PROTOCOL_VERSION as an environment variable to Docker when launching the container. Please follow Kafka's upgrade guidelines when upgrading a Kafka cluster to avoid data loss: https://kafka.apache.org/documentation/#upgrade_3_1_0.

    • When Kafka topic configuration is managed by Humio (the default), max.message.bytes on the topics is set to the value of the TOPIC_MAX_MESSAGE_BYTES configuration; the default is 8388608 (8 MB) and the minimum is 2 MB.

    • Added the config CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; by default all origins are allowed.
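
The keystore-type fix above means TLS_KEYSTORE_TYPE and TLS_TRUSTSTORE_TYPE now accept upper- and lower-case values alike; a sketch with PKCS12 as an example store type:

```shell
# Either case now works; previously only lower-case "pkcs12" was recognized.
export TLS_KEYSTORE_TYPE=PKCS12
export TLS_TRUSTSTORE_TYPE=PKCS12
```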

  • Other

    • Improved the performance of deletes from global.

    • Published new versions of the Humio Kafka Docker containers for Kafka 3.1.0.

    • Ensure only a cluster leader that still holds cluster leadership can force digesters to release partition leadership. This could cause spurious reboots in clusters where leadership was under contention.

    • Allow cluster managers access to settings for personal sandboxes and to block and kill queries in them.

    • Added tombstoning to uploaded files, which helps with avoiding data loss.

    • Do not run the Global snapshot consistency check on stateless ingest nodes.

    • The Kafka client has been upgraded to 3.1.0 from 2.8.1. 3.1.0 enables the idempotent producer by default, which implies acks=all. Clusters that set acks to a different number via EXTRA_KAFKA_CONFIGS_FILE should update their config to also specify enable.idempotence=false.

    • During Digest startup, abort fetching segments from other nodes if the assigned partition set changes while fetching.

    • Ensure a digester can only acquire or release partition leadership if no other digester has leadership of that partition. This could cause spurious reboots if digester leadership became contended.

    • During identity provider configuration, it's possible to fetch SAML configuration from an endpoint.

Fixed in this release

  • UI Changes

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Fixed a bug where the Add Column button on the Fields panel would do nothing.

    • Fixed a bug where the Package Marketplace would redirect to unsupported package versions on older Humio instances.

    • Previously, a package could be updated with another package with the same name and version but with different content. This is no longer allowed, and any attempt to do so will be rejected.

    • Fixed a compatibility issue with FileBeat 8.0.0.

    • Fixed several issues where users could add invalid query filters via the Add filter context button after selecting text in the Event List.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Fixed an issue where tail() could produce results inconsistent with other query functions, when used in a live query.

    • Fixed an issue where epoch and offsets were not always stripped from segments.

    • LSP warnings no longer crash queries.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed an issue where negated functions could lose their negation.

    • Fixed an issue where top(max) could throw an exception when given values large enough to be represented as positive infinity.

    • Fixed an issue where queries with tail() would behave in an unexpected manner when an event is focused.

    • Fixed a bug where providing a bad view/repository name when blocking queries would block the query in all views and repositories.

    • Fixed a bug in the Sankey chart such that it now updates on updated query results.

    • Fixed a compatibility issue with LogStash 7.16+ and 8.0.0 when using the Elasticsearch output plugin.

    • Fixed an issue where percentile() would crash on inputs larger than ~1.76e308.

    • Warn at startup if CORES > AvailableProcessorCount as seen by the JVM.
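
The overflow fixes above (percentile() and top(max)) both stem from how IEEE-754 doubles behave at the top of their range; a small illustration follows (the clamp_to_finite helper is hypothetical, not LogScale's actual implementation):

```python
import math
import sys

# IEEE-754 doubles overflow to positive infinity above ~1.7976931348623157e308,
# the kind of input that previously made percentile() and top(max) throw.
huge = float("1.8e308")   # parses as +inf rather than raising an error
assert math.isinf(huge)

def clamp_to_finite(x: float) -> float:
    """Illustrative defensive handling: peg non-finite inputs to the
    largest finite double instead of letting them propagate."""
    if math.isinf(x):
        return math.copysign(sys.float_info.max, x)
    return x

print(clamp_to_finite(huge))  # 1.7976931348623157e+308
```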

Humio Server 1.36.4 LTS (2022-06-13)

Version: 1.36.4
Type: LTS
Release Date: 2022-06-13
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.4/server-1.36.4.tar.gz

These notes include entries from the following previous releases: 1.36.0, 1.36.1, 1.36.2, 1.36.3

Updated dependencies with security and weakness fixes.

New features and improvements

  • UI Changes

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Improved dark mode toggle button's accessibility.

    • Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Improved accessibility when choosing a theme.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • Added ability to resize search page query field by dragging or fitting to query.

    • Time Selector is now accessible by keyboard.

    • Hovering over text within a query now shows the result of interpreting escape characters.

    • New dialogs for creation of parsers and dashboards.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added the configuration CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting

      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      enables UserRoles and disables UsagePage.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assigning node IDs to Humio nodes, a fresh node UUID is acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.

  • Functions

    • Added job which will periodically run a query and record how long it took. By default the query is count().

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added analytics on query language feature use to the audit-log under the fields queryParserMetrics.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Remove caching of API calls to prevent caching of potentially sensitive data.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added ability to override max auto shard count for a specific repository.

    • Improved the default permissions section on the group page by leaving its view expanded once the user cancels an update.

    • Allow the same view name across organizations.

    • Improved caching of UI static assets.

    • Improved the error message when an ingest request times out.

    • Added a job that scans segments which are waiting to be archived; this value is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Improved usability of the groups page.

Fixed in this release

  • Security

    • Updated JavaScript dependencies to fix the vulnerability CVE-2021-22573.

  • Summary

    • Use latest version of Java 13 on Docker image.

    • Use latest version of Alpine on Docker image.

    • Read the hash filter in chunks to avoid huge off-heap buffers.

    • Updated dependencies to Jackson to fix a vulnerability.

    • Performance improvements of IngestPartitionCoordinator.

    • Updated JavaScript dependencies to fix vulnerabilities.

    • Improve the performance of deletes from global.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Downgrade to Java 13 on Docker image to fix rare cases of JVM crashes.

  • UI Changes

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where top would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Code completion in the query editor now also works on the right hand side of :=.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed session() such that it works when events arrive out of time order.

    • Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • When interacting with the REST API for files, errors now have detailed error messages.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • The AlertJob and ScheduledSearchJob now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired value.

  • Queries

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

  • Other

    • Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.

    • Fix ingest bug where under some circumstances we would reverse the order of events in a batch.

Humio Server 1.36.3 LTS (2022-04-27)

Version: 1.36.3
Type: LTS
Release Date: 2022-04-27
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.3/server-1.36.3.tar.gz

These notes include entries from the following previous releases: 1.36.0, 1.36.1, 1.36.2

Updated dependencies with security and weakness fixes.

New features and improvements

  • UI Changes

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Improved dark mode toggle button's accessibility.

    • Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Improved accessibility when choosing a theme.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • Added ability to resize search page query field by dragging or fitting to query.

    • Time Selector is now accessible by keyboard.

    • Hovering over text within a query now shows the result of interpreting escape characters.

    • New dialogs for creation of parsers and dashboards.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added the configuration CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting

      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      enables UserRoles and disables UsagePage.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assigning node IDs to Humio nodes, a fresh node UUID is acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.

  • Functions

    • Added job which will periodically run a query and record how long it took. By default the query is count().

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added analytics on query language feature use to the audit-log under the fields queryParserMetrics.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Remove caching of API calls to prevent caching of potentially sensitive data.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added ability to override max auto shard count for a specific repository.

    • Improved the default permissions section on the group page by leaving its view expanded once the user cancels an update.

    • Allow the same view name across organizations.

    • Improved caching of UI static assets.

    • Improved the error message when an ingest request times out.

    • Added a job that scans segments which are waiting to be archived; this value is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Improved usability of the groups page.

Fixed in this release

  • Summary

    • Use latest version of Java 13 on Docker image.

    • Use latest version of Alpine on Docker image.

    • Read the hash filter in chunks to avoid huge off-heap buffers.

    • Updated dependencies to Jackson to fix a vulnerability.

    • Performance improvements of IngestPartitionCoordinator.

    • Updated JavaScript dependencies to fix vulnerabilities.

    • Improve the performance of deletes from global.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Downgrade to Java 13 on Docker image to fix rare cases of JVM crashes.

  • UI Changes

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where top would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Code completion in the query editor now also works on the right hand side of :=.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed session() such that it works when events arrive out of time order.

    • Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • When interacting with the REST API for files, errors now have detailed error messages.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • The AlertJob and ScheduledSearchJob now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired value.

  • Queries

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

  • Other

    • Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.

    • Fix ingest bug where under some circumstances we would reverse the order of events in a batch.

Humio Server 1.36.2 LTS (2022-03-01)

Version: 1.36.2
Type: LTS
Release Date: 2022-03-01
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.2/server-1.36.2.tar.gz

These notes include entries from the following previous releases: 1.36.0, 1.36.1

Performance and stability improvements.

New features and improvements

  • UI Changes

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Improved dark mode toggle button's accessibility.

    • Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Improved accessibility when choosing a theme.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • Added ability to resize search page query field by dragging or fitting to query.

    • Time Selector is now accessible by keyboard.

    • Hovering over text within a query now shows the result of interpreting escape characters.

    • New dialogs for creation of parsers and dashboards.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added the configuration CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins; the default allows all origins.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting

      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      enables UserRoles and disables UsagePage.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assigning node IDs to Humio nodes, a fresh node UUID is acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.

  • Functions

    • Added job which will periodically run a query and record how long it took. By default the query is count().

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added analytics on query language feature use to the audit-log under the fields queryParserMetrics.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Remove caching of API calls to prevent caching of potentially sensitive data.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added ability to override max auto shard count for a specific repository.

    • Improved the default permissions section on the group page by leaving its view expanded once the user cancels an update.

    • Allow the same view name across organizations.

    • Improved caching of UI static assets.

    • Improved the error message when an ingest request times out.

    • Added a job that scans segments which are waiting to be archived; this value is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Improved usability of the groups page.

Fixed in this release

  • Summary

    • Read the hash filter in chunks to avoid huge off-heap buffers.

    • Performance improvements of IngestPartitionCoordinator.

    • Improve the performance of deletes from global.

    • Improved off-heap memory handling. Humio now typically uses only 1 GB on systems with 32 vCPUs, down from typically 16 GB. This leaves more memory for other processes and page cache for data.

    • Downgrade to Java 13 on Docker image to fix rare cases of JVM crashes.

  • UI Changes

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where top() would fail if the sum of the values exceeded 2^63-1. Sums exceeding this limit are now pegged to 2^63-1.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Code completion in the query editor now also works on the right hand side of :=.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed session() such that it works when events arrive out of time order.

    • Fixed an issue where live queries belonging to a user were repeatedly restarted after that user was deleted.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • When interacting with the REST API for files, errors now have detailed error messages.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • The AlertJob and ScheduledSearchJob now log validation errors from running the queries only as warnings; previously, some of these were logged as errors.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.

  • Queries

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

  • Other

    • Fixed a race condition between nodes creating the merge result for the same target segment, and also transferring it among the nodes concurrently. If a query read the file during that race condition, an in-memory cache of the file header might hold contents that did not match the local file, resulting in "Broken segment" warnings in queries.

Humio Server 1.36.1 LTS (2022-02-14)

Version: 1.36.1
Type: LTS
Release Date: 2022-02-14
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.1/server-1.36.1.tar.gz

These notes include entries from the following previous releases: 1.36.0

Performance and stability improvements.

New features and improvements

  • UI Changes

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Improved dark mode toggle button's accessibility.

    • Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Improved accessibility when choosing a theme.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • Added ability to resize search page query field by dragging or fitting to query.

    • Time Selector is now accessible by keyboard.

    • Hovering over text within a query now shows the result of interpreting escape characters.

    • New dialogs for creation of parsers and dashboards.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added the configuration CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins. By default, all origins are allowed.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting

      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      enables UserRoles and disables UsagePage.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, a fresh node UUID is acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.
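
      The configuration entries above can be combined in a node's environment. A minimal sketch, using hypothetical origins and ZooKeeper hosts (all example.com values are placeholders, not defaults):

      ```shell
      # Hypothetical example values -- substitute your own origins and hosts.

      # Restrict CORS to two specific origins (by default all origins are allowed):
      CORS_ALLOWED_ORIGINS=https://app.example.com,https://ops.example.com

      # Toggle feature flags on startup: enable UserRoles, disable UsagePage.
      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      # ZOOKEEPER_URL is now optional; omit it and the zookeeper-status-logger job does not run.
      ZOOKEEPER_URL=zk1.example.com:2181,zk2.example.com:2181
      ```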

  • Functions

    • Added a job which periodically runs a query and records how long it takes. By default the query is count().

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added analytics on query language feature use to the audit-log under the fields queryParserMetrics.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Remove caching of API calls to prevent caching of potentially sensitive data.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added ability to override max auto shard count for a specific repository.

    • Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.

    • Allow the same view name across organizations.

    • Improved caching of UI static assets.

    • Improved the error message when an ingest request times out.

    • Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Improved usability of the groups page.

Fixed in this release

  • Summary

    • Read hashfilter files in chunks to avoid huge off-heap buffers.

    • Performance improvements of IngestPartitionCoordinator.

    • Improve the performance of deletes from global.

    • Downgrade to Java 13 on the Docker image to fix rare cases of JVM crashes.

  • UI Changes

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where top() would fail if the sum of the values exceeded 2^63-1. Sums exceeding this limit are now pegged to 2^63-1.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Code completion in the query editor now also works on the right hand side of :=.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed session() such that it works when events arrive out of time order.

    • Fixed an issue where live queries belonging to a user were repeatedly restarted after that user was deleted.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • When interacting with the REST API for files, errors now have detailed error messages.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • The AlertJob and ScheduledSearchJob now log validation errors from running the queries only as warnings; previously, some of these were logged as errors.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.

  • Queries

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

Humio Server 1.36.0 LTS (2022-01-31)

Version: 1.36.0
Type: LTS
Release Date: 2022-01-31
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: Yes

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.0/server-1.36.0.tar.gz

Beta: Bucket storage support for dual targets

Support for dual targets to allow using one as the preferred download and the other to trust for durability. One example of this is to save on cost (on traffic) by using a local bucket implementation, such as MinIO, in the local datacenter as the preferred bucket storage target, while using a remote Amazon S3 bucket as the trusted bucket for durability. If the local MinIO bucket is lost (or just not responding for a while) the Humio cluster still works using the AWS S3 bucket with no reconfiguration or restart required. Configuration of the second bucket is via configuration entries similar to the existing STORAGE keys, but using the prefix STORAGE_2 for the extra bucket.

When using dual targets, the bucket storage backends may need different proxy configurations. The new configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS (default false) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to true, each bucket preserves its active proxy/endpoint configuration, and a change to those settings triggers creation of a fresh internally persisted bucket storage access configuration.
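
A minimal sketch of the dual-target configuration described above. The exact key names, in particular the STORAGE_2-prefixed variants, should be verified against the Bucket Storage documentation; the bucket names and endpoints here are hypothetical:

```shell
# Preferred target: local MinIO bucket (assumed S3_STORAGE_* key names).
S3_STORAGE_BUCKET=humio-local
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ENDPOINT_BASE=http://minio.local:9000

# Trusted durability target: remote Amazon S3, using the STORAGE_2 prefix
# (key shape assumed -- check the Bucket Storage reference for exact names).
S3_STORAGE_2_BUCKET=humio-durable
S3_STORAGE_2_REGION=us-east-1

# Keep per-backend proxy/endpoint configuration separate:
BUCKET_STORAGE_MULTIPLE_ENDPOINTS=true
```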

New features and improvements

  • UI Changes

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Improved dark mode toggle button's accessibility.

    • Disable the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Improved accessibility when choosing a theme.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • Added ability to resize search page query field by dragging or fitting to query.

    • Time Selector is now accessible by keyboard.

    • Hovering over text within a query now shows the result of interpreting escape characters.

    • New dialogs for creation of parsers and dashboards.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added the configuration CORS_ALLOWED_ORIGINS, a comma-separated list of allowed CORS origins. By default, all origins are allowed.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting

      INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage

      enables UserRoles and disables UsagePage.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, a fresh node UUID is acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.

  • Functions

    • Added a job which periodically runs a query and records how long it takes. By default the query is count().

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added analytics on query language feature use to the audit-log under the fields queryParserMetrics.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Remove caching of API calls to prevent caching of potentially sensitive data.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added ability to override max auto shard count for a specific repository.

    • Improved the default permissions view on the group page by leaving it expanded once the user cancels an update.

    • Allow the same view name across organizations.

    • Improved caching of UI static assets.

    • Improved the error message when an ingest request times out.

    • Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Improved usability of the groups page.

Fixed in this release

  • UI Changes

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where top() would fail if the sum of the values exceeded 2^63-1. Sums exceeding this limit are now pegged to 2^63-1.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Code completion in the query editor now also works on the right hand side of :=.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed session() such that it works when events arrive out of time order.

    • Fixed an issue where live queries belonging to a user were repeatedly restarted after that user was deleted.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • When interacting with the REST API for files, errors now have detailed error messages.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • The AlertJob and ScheduledSearchJob now log validation errors from running the queries only as warnings; previously, some of these were logged as errors.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.

  • Queries

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

Humio Server 1.35.0 GA (2022-01-17)

Version: 1.35.0
Type: GA
Release Date: 2022-01-17
Availability: Cloud
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: Yes

Available for download two days after release.

Beta: Bucket storage support for dual targets

Support for dual targets to allow using one as the preferred download and the other to trust for durability. One example of this is to save on cost (on traffic) by using a local bucket implementation, such as MinIO, in the local datacenter as the preferred bucket storage target, while using a remote Amazon S3 bucket as the trusted bucket for durability. If the local MinIO bucket is lost (or just not responding for a while) the Humio cluster still works using the AWS S3 bucket with no reconfiguration or restart required. Configuration of the second bucket is via configuration entries similar to the existing STORAGE keys, but using the prefix STORAGE_2 for the extra bucket.

When using dual targets, the bucket storage backends may each need their own proxy configuration. The new configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS (default false) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to true, each bucket preserves its active proxy/endpoint configuration, and a change to those settings will trigger creation of a fresh internally persisted bucket storage access configuration.
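
As an illustration, a dual-target setup as described above might look like the following configuration sketch. The key names other than the STORAGE_2 prefix and BUCKET_STORAGE_MULTIPLE_ENDPOINTS are assumptions modeled on the existing S3 storage keys; consult the Bucket Storage documentation for the authoritative names and values.

```ini
# Preferred target: local MinIO bucket (cheap traffic, fast downloads).
# Key names below are illustrative, modeled on the existing S3_STORAGE_* keys.
S3_STORAGE_BUCKET=humio-segments-local
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ENDPOINT_BASE=http://minio.internal:9000

# Durability target: remote AWS S3 bucket, configured with the STORAGE_2 prefix.
S3_STORAGE_2_BUCKET=humio-segments-durable
S3_STORAGE_2_REGION=us-east-1

# Apply proxy/endpoint configuration per backend rather than globally.
BUCKET_STORAGE_MULTIPLE_ENDPOINTS=true
```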

New features and improvements

  • UI Changes

    • Added ability to resize search page query field by dragging or fitting to query.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • New dialogs for creation of parsers and dashboards.

    • Improved accessibility when choosing a theme.

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Time Selector is now accessible by keyboard.

    • Improved dark mode toggle button's accessibility.

    • Disabled the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.

    • Hovering over text within a query now shows the result of interpreting escape characters.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.

    • Added INITIAL_FEATURE_FLAGS which lets you enable/disable feature flags on startup. For instance, setting INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage enables UserRoles and disables UsagePage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, and the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file, a fresh node UUID is acquired.

    • New configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and many configurations using STORAGE_2 as prefix. See Bucket Storage

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL, the ingest queue compression level, from 1 to 0. This reduces time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.
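
The +Flag,-Flag syntax accepted by INITIAL_FEATURE_FLAGS (described above) can be sketched with a small parser. This is an illustration of the documented syntax only, not LogScale's actual implementation:

```python
def parse_feature_flags(spec: str) -> dict:
    """Parse a '+Flag,-Flag' spec into {flag_name: enabled}.

    Illustrative only; real parsing inside LogScale may differ.
    """
    flags = {}
    for item in spec.split(","):
        item = item.strip()
        if not item:
            continue
        sign, name = item[0], item[1:]
        if sign not in "+-":
            raise ValueError(f"flag must start with + or -: {item!r}")
        flags[name] = (sign == "+")
    return flags

# The example from the release note:
# "+UserRoles,-UsagePage" enables UserRoles and disables UsagePage.
assert parse_feature_flags("+UserRoles,-UsagePage") == {
    "UserRoles": True,
    "UsagePage": False,
}
```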

  • Functions

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result.

  • Other

    • Allow the same view name across organizations.

    • Improved usability of the groups page.

    • Added warning logs when errors are rendered to browser during OAuth flows.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Improved the default permissions view on the groups page by leaving it expanded when the user cancels an update.

    • Improved the error message when an ingest request times out.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added ability to override max auto shard count for a specific repository.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added a job that scans segments which are waiting to be archived; the latency is recorded in the metric s3-archiving-latency-max.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Added job which will periodically run a query and record how long it took. By default the query is count().

    • Added option to specify an IP Filter for which addresses hostname verification should not be made.

    • Added analytics on query language feature use to the audit-log under the field queryParserMetrics.

Fixed in this release

  • UI Changes

    • Fixed session() function such that it works when events arrive out of time order.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • Make writes to Kafka's chatter topic block in a similar manner as writes to global.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • The AlertJob and ScheduledSearchJob now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of the user.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Code completion in the query editor now also works on the right hand side of :=.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed an issue where top() would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged at 2^63-1.
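
The pegging behavior described for top above amounts to saturating 64-bit signed addition. A minimal sketch of the idea (not LogScale's actual code):

```python
# Signed 64-bit maximum, the cap mentioned in the release note.
MAX_I64 = 2**63 - 1

def saturating_add(total: int, value: int) -> int:
    """Add value to total, clamping ("pegging") at the signed 64-bit maximum."""
    result = total + value
    return result if result <= MAX_I64 else MAX_I64

# Adding past the maximum stays pegged instead of overflowing:
assert saturating_add(MAX_I64, 1) == MAX_I64
```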

  • Queries

    • Query partition tables updates are now rejected if written by a node that is no longer the cluster leader.

Humio Server 1.34.3 LTS (2022-03-09)

Version: 1.34.3
Type: LTS
Release Date: 2022-03-09
Availability: Cloud
End of Support: 2022-12-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.3/server-1.34.3.tar.gz

These notes include entries from the following previous releases: 1.34.0, 1.34.1, 1.34.2

Performance improvements of Ingest and internal caching.

New features and improvements

  • UI Changes

    • Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.

    • Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.

    • Allow resize of columns in the event list by mouse.

    • Disable actions if permissions are handled externally.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Validation error messages are now more precise and have improved formatting.

    • The overall look of message boxes in Humio has been updated.

    • Updated the links for Privacy Notice and Terms and Conditions.

    • Dark mode is officially deemed stable enough to be out of beta.

  • GraphQL API

    • The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.

    • Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.

    • Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.

    • Added 2-phase migration that will allow old user api tokens to be used and clean global from secrets after a 30 day period.

    • Changed old personal user token implementation to hash based.

    • Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.

  • Configuration

    • When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.

  • Functions

    • Improved performance of the query functions drop() and rename() by quite a bit.

    • Added query function math:arctan2() to the query language.

    • Added the communityId() function for calculating hashes of network flow tuples according to the (Community ID Spec).

    • The kvParse() query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.

    • Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.

    • Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.
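
The communityId() function mentioned above hashes network flow tuples per the Community ID spec. A minimal Python sketch of version 1 of the public scheme for IPv4 TCP/UDP flows (this mirrors the spec, not LogScale's internal implementation):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr: str, daddr: str, sport: int, dport: int,
                    proto: int = 6, seed: int = 0) -> str:
    """Sketch of the Community ID v1 flow hash for IPv4 TCP(6)/UDP(17)."""
    src = socket.inet_aton(saddr)
    dst = socket.inet_aton(daddr)
    # Order the endpoints so both directions of a flow hash identically.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    # seed (2B BE) + src IP + dst IP + proto (1B) + pad (1B) + ports (2B BE each)
    data = (struct.pack("!H", seed) + src + dst +
            struct.pack("!BBHH", proto, 0, sport, dport))
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()

# Both directions of the same flow yield the same ID:
fwd = community_id_v1("1.2.3.4", "5.6.7.8", 1122, 3344)
rev = community_id_v1("5.6.7.8", "1.2.3.4", 3344, 1122)
assert fwd == rev and fwd.startswith("1:")
```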

  • Other

    • It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.

    • Added new metric: bucket-storage-upload-latency-max. It shows how long the event that has been pending upload to bucket storage the longest has been waiting.

    • It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.

    • Added a precondition that ensures that the number of ingest partitions cannot be reduced.

    • Added validation and a clearer error message for queries with a time span of 0.

    • Added metric for the number of currently running streaming queries.

    • Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.

    • Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.

    • Added Australian states to the States dropdown.

    • New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).

    • Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.

    • Improved the error reporting when installing, updating or exporting a package fails.

    • Create, update, and delete of dashboards is now audit logged.

    • Reword regular expression related error messages.

    • Added management API to put hosts in maintenance mode.

    • Improved error messages when an invalid regular expression is used in replace.

    • Retention based on compressed size will no longer account for segment replication.

    • Query validation has been improved to include several errors which used to first show up after submitting search.

    • Prepopulate email field when invited user is filling in a form with this information.

    • Node roles can now be assigned/removed at runtime.

    • Improved partition layout auto-balancing algorithm.

    • Added support in the humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.

    • A compressed segment with a size of 1GB will now always count for retention as 1 GB. Previously, a compressed segment with a size of 1GB might count for more than 1GB when calculating retention, if that segment had more replicas than configured. The effect on the retention policy was that if you had configured retention of .0GB compressed bytes, Humio might retain less than .0GB of compressed data if any of those segments had too many replicas.

    • Added checksum verification within hash filter files on read.

    • Query editor: improved code completion of function names.

    • Minor optimization when using groupBy with a single field.

    • Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.

    • Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.

Fixed in this release

  • Security

    • Updated dependencies to Jawn for CVE-2022-21653.

    • Updated dependencies to nanoid for CVE-2021-23566.

    • Updated dependencies to Netty to fix CVE-2021-43797.

    • Updated dependencies to node-fetch for CVE-2022-0235.

    • Updated dependencies to Akka for CVE-2021-42697.

    • Updated dependencies to follow-redirects for CVE-2022-0155.

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).

  • Summary

    • Performance improvements of Ingest and internal caching.

    • Updated dependencies to Jackson to fix a weakness.

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

  • Automation and Alerts

    • Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.

    • Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.

    • Alerts and scheduled searches are now enabled by default when created. The check that disabled these entities if no actions were attached has been replaced with a warning, which informs the user that even though the entity is enabled, nothing will trigger since no actions are attached.

  • Other

    • Fixed an issue where the segment merger could mishandle errors during merge.

    • Fixed an issue on on-prem trial license that would use user count limits from cloud.

    • Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.

    • Fixed styling issue on the search page where long errors would overflow the screen.

    • Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.

    • When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (ALERT_DESPITE_WARNINGS=true), the warning text will now be shown as an error message on the alert in the UI.

    • Fixed an issue where certain problems would highlight the first word in a query.

    • Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.

    • Fixed incorrect results when searching through saved queries and recent queries.

    • Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • Fixed a bug where shared lookup files could not be downloaded from the UI.

    • Fixed a bug with the cache not being populated between restarts on single node clusters.

    • Fixed an issue when adding a group to a repository or view, so that an error message is displayed when the user is not the organization owner or root.

    • Prevent unauthorized analytics requests being sent.

    • Fixed an issue where error messages would show wrong input.

    • The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.

    • Removed error query param from URL when entering Humio.

    • Fixed an issue that in rare cases would cause login errors.

    • No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.

    • Fixed some widgets on dashboards reporting errors while waiting for data to load.

    • Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.

    • Changes to the state of IOC access on organizations are now reflected in the audit log.

    • Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.

    • When a digester fails to start, rather than restarting the JVM as previous releases did, Humio now keeps retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric on the affected partitions as the uptime of the JVM process, to signal that data is not flowing on those partitions so that a monitored metric can raise an alarm. In the absence of a proper latency value, a growing non-zero metric is better than a metric of zero.

    • Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.

    • When creating or updating an action, the backend now verifies that the host url associated with the action is prefixed with either http:// or https://. This affects Actions of the type: Webhook, OpsGenie, Single-Channel Slack and VictorOps.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.

    • Changed field type for zip codes.

    • Fixed a number of stability issues with the event redaction job.

    • Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.

    • Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).

    • Changed default package type to "application" on the export package wizard.

    • Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.

    • Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.

    • Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.

    • Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.

    • Browser storage is now cleared when initializing while unauthenticated.

    • Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.

    • Remove the ability to create ingest tokens and ingest listeners on system repositories.

    • When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.

    • Fixed an issue with sandbox renaming that could leave the sandbox in a bad state.

    • When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.

    • Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).

    • Fixed a compatibility issue with Filebeat 7.16.0

    • Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to U+FFFD (the Unicode replacement character).

    • Fixed an issue where series() failed to serialize its state properly.

    • When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.

    • Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.

    • Temporary fix of issue with live queries not having first aggregator as bucket() or timeChart(), but then later in the query having those as a second aggregator. As a temporary fix, such queries will fail. In later releases, this will get fixed more properly.

    • Changes to the state of backend feature flags are now reflected in the audit log.

    • Fixed an issue where some regexes could not be used.

    • Fixed an issue in the interactive tutorial.

    • Support Java 17.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.

    • Fixed a bug where query coordination partitions would not get updated.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Include view+parser-name in thread dumps when time is spent inside a parser.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.

    • Fixed an issue where release notes would not close when a release is open.

Humio Server 1.34.2 LTS (2022-02-01)

Version: 1.34.2
Type: LTS
Release Date: 2022-02-01
Availability: Cloud
End of Support: 2022-12-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.2/server-1.34.2.tar.gz

These notes include entries from the following previous releases: 1.34.0, 1.34.1

Updated dependencies with fixes for security vulnerabilities and weaknesses.

New features and improvements

  • UI Changes

    • Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.

    • Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.

    • Allow resize of columns in the event list by mouse.

    • Disable actions if permissions are handled externally.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Validation error messages are now more precise and have improved formatting.

    • The overall look of message boxes in Humio has been updated.

    • Updated the links for Privacy Notice and Terms and Conditions.

    • Dark mode is officially deemed stable enough to be out of beta.

  • GraphQL API

    • The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.

    • Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.

    • Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.

    • Added 2-phase migration that will allow old user api tokens to be used and clean global from secrets after a 30 day period.

    • Changed old personal user token implementation to hash based.

    • Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.

  • Configuration

    • When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.

  • Functions

    • Improved performance of the query functions drop() and rename() by quite a bit.

    • Added query function math:arctan2() to the query language.

    • Added the communityId() function for calculating hashes of network flow tuples according to the (Community ID Spec).

    • The kvParse() query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.

    • Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.

    • Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.
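
The separatorPadding behavior added to kvParse() above can be illustrated with a toy key-value extractor. The function shape and regexes below are assumptions for illustration only; they do not reproduce LogScale's parser:

```python
import re

def kv_parse(text: str, separator_padding: str = "unknown") -> dict:
    """Toy sketch of key=value extraction with a separatorPadding-like option.

    With padding="whitespace", a padded separator ('key = ') is recognized,
    so unquoted empty values can still be captured.
    """
    if separator_padding == "whitespace":
        # Allow whitespace around '=', and permit an empty value.
        pattern = r"(\w+)\s*=\s*(\S*)"
    else:
        # Default: tight 'key=value' pairs with non-empty values.
        pattern = r"(\w+)=(\S+)"
    return dict(re.findall(pattern, text))

assert kv_parse("key=value foo=bar") == {"key": "value", "foo": "bar"}
assert kv_parse("a = 1 b = ", separator_padding="whitespace") == {"a": "1", "b": ""}
```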

  • Other

    • It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.

    • Added new metric: bucket-storage-upload-latency-max. It shows how long the event that has been pending upload to bucket storage the longest has been waiting.

    • It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.

    • Added a precondition that ensures that the number of ingest partitions cannot be reduced.

    • Added validation and a clearer error message for queries with a time span of 0.

    • Added metric for the number of currently running streaming queries.

    • Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.

    • Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.

    • Added Australian states to the States dropdown.

    • New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).

    • Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.

    • Improved the error reporting when installing, updating or exporting a package fails.

    • Create, update, and delete of dashboards is now audit logged.

    • Reworded regular-expression-related error messages.

    • Added management API to put hosts in maintenance mode.

    • Improved error messages when an invalid regular expression is used in replace.

    • Retention based on compressed size will no longer account for segment replication.

    • Query validation has been improved to include several errors that previously only appeared after submitting the search.

    • The email field is now prepopulated when an invited user fills in a form with this information.

    • Node roles can now be assigned/removed at runtime.

    • Improved partition layout auto-balancing algorithm.

    • Added support in the Humio event collector for organization- and system-wide ingest tokens, and the ability to use a parser from a different repository than the one being ingested into.

    • A compressed segment with a size of 1 GB will now always count for retention as 1 GB. Previously, such a segment might count for more than 1 GB when calculating retention if it had more replicas than configured. As a result, Humio might retain less compressed data than the configured retention limit if any segments had too many replicas.

    • Added checksum verification within hash filter files on read.

    • Query editor: improved code completion of function names.

    • Minor optimization when using groupBy with a single field.

    • Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.

    • Reduced the default limit on the number of datasources for sandbox repositories that are created when a user is created.

Fixed in this release

  • Security

    • Updated dependencies to Jawn for CVE-2022-21653.

    • Updated dependencies to nanoid for CVE-2021-23566.

    • Updated dependencies to Netty to fix CVE-2021-43797.

    • Updated dependencies to node-fetch for CVE-2022-0235.

    • Updated dependencies to Akka for CVE-2021-42697.

    • Updated dependencies to follow-redirects for CVE-2022-0155.

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).

  • Summary

    • Updated dependencies to Jackson to fix a weakness.

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

  • Automation and Alerts

    • Alerts and scheduled searches are no longer run for organizations with an expired trial license on cloud, or with any expired license on-prem.

    • Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.

    • Alerts and scheduled searches are now enabled by default when created. The check disabling these entities if no actions are attached has been replaced with a warning, informing the user that even though the entity is enabled, nothing will trigger since no actions are attached.

  • Other

    • Fixed an issue where the segment merger could mishandle errors during merge.

    • Fixed an issue where an on-prem trial license would use user count limits from cloud.

    • Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.

    • Fixed styling issue on the search page where long errors would overflow the screen.

    • Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.

    • When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (ALERT_DESPITE_WARNINGS=true), the warning text will now be shown as an error message on the alert in the UI.

    • Fixed an issue where certain problems would highlight the first word in a query.

    • Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.

    • Fixed incorrect results when searching through saved queries and recent queries.

    • Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • Fixed a bug where shared lookup files could not be downloaded from the UI.

    • Fixed a bug with the cache not being populated between restarts on single node clusters.

    • Fixed an issue where adding a group to a repository or view would display an error message when the user was not the organization owner or root.

    • Prevent unauthorized analytics requests being sent.

    • Fixed an issue where error messages would show wrong input.

    • The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.

    • Removed error query param from URL when entering Humio.

    • Fixed an issue that in rare cases would cause login errors.

    • No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.

    • Fixed some widgets on dashboards reporting errors while waiting for data to load.

    • Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.

    • Changes to the state of IOC access on organizations are now reflected in the audit log.

    • Fixed an issue where a scheduled search would fail and be retried if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.

    • When a digester fails to start, rather than restarting the JVM as previous releases did, Humio now keeps retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric on the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm. In the absence of a proper latency measurement, a growing non-zero metric is better than a metric stuck at zero.

    • Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.

    • When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either http:// or https://. This affects actions of the following types: Webhook, OpsGenie, Single-Channel Slack, and VictorOps.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.

    • Changed field type for zip codes.

    • Fixed a number of stability issues with the event redaction job.

    • Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.

    • Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).

    • Changed default package type to "application" on the export package wizard.

    • Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.

    • Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.

    • Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.

    • Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.

    • Browser storage is now cleared when the application initializes while unauthenticated.

    • Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.

    • Removed the ability to create ingest tokens and ingest listeners on system repositories.

    • When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.

    • Fixed an issue where renaming a sandbox could leave it in a bad state.

    • When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.

    • Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to the Unicode replacement character (U+FFFD).

    • Fixed an issue where series() failed to serialize its state properly.

    • When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.

    • Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.

    • Temporary fix for an issue with live queries that do not have bucket() or timeChart() as the first aggregator, but use one of them as a second aggregator later in the query. As a temporary fix, such queries will fail. A more complete fix will follow in a later release.

    • Changes to the state of backend feature flags are now reflected in the audit log.

    • Fixed an issue where some regexes could not be used.

    • Fixed an issue in the interactive tutorial.

    • Support Java 17.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.

    • Fixed a bug where query coordination partitions would not get updated.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Include view+parser-name in thread dumps when time is spent inside a parser.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.

    • Fixed an issue where release notes would not close when a release is open.

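The host URL check described in the fixes above amounts to a simple scheme-prefix validation. A minimal Python sketch of the idea (illustrative only, not Humio's actual code; the function name is hypothetical):

```python
def is_valid_action_url(url: str) -> bool:
    """Accept only host URLs that explicitly use HTTP or HTTPS,
    mirroring the scheme-prefix check described above (illustrative only)."""
    return url.startswith(("http://", "https://"))
```

Rejecting any other scheme up front gives the user an immediate validation error when saving the action, instead of a confusing failure when the action later fires.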
Humio Server 1.34.1 LTS (2022-01-06)

Version: 1.34.1
Type: LTS
Release Date: 2022-01-06
Availability: Cloud
End of Support: 2022-12-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.1/server-1.34.1.tar.gz

These notes include entries from the following previous releases: 1.34.0

Updated dependencies with security and weakness fixes.

New features and improvements

  • UI Changes

    • Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.

    • Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.

    • Allow resize of columns in the event list by mouse.

    • Disable actions if permissions are handled externally.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Validation error messages are now more precise and have improved formatting.

    • The overall look of message boxes in Humio has been updated.

    • Updated the links for Privacy Notice and Terms and Conditions.

    • Dark mode is officially deemed stable enough to be out of beta.

  • GraphQL API

    • The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.

    • Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.

    • Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.

    • Added a 2-phase migration that allows old user API tokens to be used and cleans secrets from global after a 30-day period.

    • Changed the old personal user token implementation to be hash-based.

    • Renamed the deleteEvents-related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.

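The stop-query mutations listed above are invoked like any other GraphQL mutation. A hedged Python sketch of building a stopAllQueries request body (the field selection, /graphql path, and Bearer-token header in the comment are assumptions based on typical Humio API usage, not taken from these notes):

```python
import json

def stop_all_queries_payload() -> str:
    # Mutation name comes from the release notes; the exact
    # selection set returned by the server is not shown there.
    return json.dumps({"query": "mutation { stopAllQueries }"})

# Sending it would look roughly like (hypothetical host and token):
#   curl -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/json" \
#        -d "$(python -c 'print(stop_all_queries_payload())')" \
#        https://logscale.example.com/graphql
```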
  • Configuration

    • When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.

  • Functions

    • Significantly improved the performance of the query functions drop() and rename().

    • Added query function math:arctan2() to the query language.

    • Added the communityId() function for calculating hashes of network flow tuples according to the Community ID specification.

    • The kvParse() query function can now parse unquoted empty values. Use the new parameter separatorPadding to specify whether your data has whitespace around the key-value separator (typically =). The default is "Unknown", which leaves the behavior of the function unchanged.

    • Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.

    • Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.

  • Other

    • It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.

    • Added new metric: bucket-storage-upload-latency-max. It shows how long the oldest event pending upload to bucket storage has been waiting.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and later.

    • Added a precondition that ensures that the number of ingest partitions cannot be reduced.

    • Added validation and a clearer error message for queries with a time span of 0.

    • Added metric for the number of currently running streaming queries.

    • Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.

    • Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.

    • Added Australian states to the States dropdown.

    • New metric: ingest-request-delay. Histogram of the time ingest requests spend delayed due to exceeding the limit on concurrent ingest processing (milliseconds).

    • Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.

    • Improved the error reporting when installing, updating or exporting a package fails.

    • Create, update, and delete of dashboards is now audit logged.

    • Reworded regular-expression-related error messages.

    • Added management API to put hosts in maintenance mode.

    • Improved error messages when an invalid regular expression is used in replace.

    • Retention based on compressed size will no longer account for segment replication.

    • Query validation has been improved to include several errors that previously only appeared after submitting the search.

    • The email field is now prepopulated when an invited user fills in a form with this information.

    • Node roles can now be assigned/removed at runtime.

    • Improved partition layout auto-balancing algorithm.

    • Added support in the Humio event collector for organization- and system-wide ingest tokens, and the ability to use a parser from a different repository than the one being ingested into.

    • A compressed segment with a size of 1 GB will now always count for retention as 1 GB. Previously, such a segment might count for more than 1 GB when calculating retention if it had more replicas than configured. As a result, Humio might retain less compressed data than the configured retention limit if any segments had too many replicas.

    • Added checksum verification within hash filter files on read.

    • Query editor: improved code completion of function names.

    • Minor optimization when using groupBy with a single field.

    • Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.

    • Reduced the default limit on the number of datasources for sandbox repositories that are created when a user is created.

Fixed in this release

  • Security

    • Updated dependencies to Netty to fix CVE-2021-43797.

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).

  • Summary

    • Updated dependencies to Jackson to fix a weakness.

  • Automation and Alerts

    • Alerts and scheduled searches are no longer run for organizations with an expired trial license on cloud, or with any expired license on-prem.

    • Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times shortly after each other if an action was slow. Now, the alert is throttled as soon as it triggers.

    • Alerts and scheduled searches are now enabled by default when created. The check disabling these entities if no actions are attached has been replaced with a warning, informing the user that even though the entity is enabled, nothing will trigger since no actions are attached.

  • Other

    • Fixed an issue where the segment merger could mishandle errors during merge.

    • Fixed an issue where an on-prem trial license would use user count limits from cloud.

    • Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.

    • Fixed styling issue on the search page where long errors would overflow the screen.

    • Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.

    • When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (ALERT_DESPITE_WARNINGS=true), the warning text will now be shown as an error message on the alert in the UI.

    • Fixed an issue where certain problems would highlight the first word in a query.

    • Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.

    • Fixed incorrect results when searching through saved queries and recent queries.

    • Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • Fixed a bug where shared lookup files could not be downloaded from the UI.

    • Fixed a bug with the cache not being populated between restarts on single node clusters.

    • Fixed an issue where adding a group to a repository or view would display an error message when the user was not the organization owner or root.

    • Prevent unauthorized analytics requests being sent.

    • Fixed an issue where error messages would show wrong input.

    • The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.

    • Removed error query param from URL when entering Humio.

    • Fixed an issue that in rare cases would cause login errors.

    • No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.

    • Fixed some widgets on dashboards reporting errors while waiting for data to load.

    • Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.

    • Changes to the state of IOC access on organizations are now reflected in the audit log.

    • Fixed an issue where a scheduled search would fail and be retried if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.

    • When a digester fails to start, rather than restarting the JVM as previous releases did, Humio now keeps retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric on the affected partitions as the uptime of the JVM process. The idea is to signal that data is not flowing on those partitions, so that a monitored metric can raise an alarm. In the absence of a proper latency measurement, a growing non-zero metric is better than a metric stuck at zero.

    • Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.

    • When creating or updating an action, the backend now verifies that the host URL associated with the action is prefixed with either http:// or https://. This affects actions of the following types: Webhook, OpsGenie, Single-Channel Slack, and VictorOps.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.

    • Changed field type for zip codes.

    • Fixed a number of stability issues with the event redaction job.

    • Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.

    • Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).

    • Changed default package type to "application" on the export package wizard.

    • Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.

    • Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.

    • Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.

    • Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.

    • Browser storage is now cleared when the application initializes while unauthenticated.

    • Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.

    • Removed the ability to create ingest tokens and ingest listeners on system repositories.

    • When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.

    • Fixed an issue where renaming a sandbox could leave it in a bad state.

    • When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.

    • Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to the Unicode replacement character (U+FFFD).

    • Fixed an issue where series() failed to serialize its state properly.

    • When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.

    • Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.

    • Temporary fix for an issue with live queries that do not have bucket() or timeChart() as the first aggregator, but use one of them as a second aggregator later in the query. As a temporary fix, such queries will fail. A more complete fix will follow in a later release.

    • Changes to the state of backend feature flags are now reflected in the audit log.

    • Fixed an issue where some regexes could not be used.

    • Fixed an issue in the interactive tutorial.

    • Support Java 17.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.

    • Fixed a bug where query coordination partitions would not get updated.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Include view+parser-name in thread dumps when time is spent inside a parser.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.

    • Fixed an issue where release notes would not close when a release is open.
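The invalid-UTF-16 handling noted in the fixes above matches standard replacement-character decoding. A small Python illustration (not Humio's code; the byte string and helper name are made up for the example):

```python
def decode_with_replacement(data: bytes) -> str:
    # Invalid UTF-16 code units become U+FFFD instead of failing ingestion.
    return data.decode("utf-16-le", errors="replace")

# 'A', then a lone high surrogate (0xD800, invalid on its own), then 'B':
raw = b"\x41\x00\x00\xd8\x42\x00"
```

Decoding `raw` with `errors="replace"` yields a string containing U+FFFD where the lone surrogate was, rather than raising a UnicodeDecodeError.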

Humio Server 1.34.0 LTS (2021-12-15)

Version: 1.34.0
Type: LTS
Release Date: 2021-12-15
Availability: Cloud
End of Support: 2022-12-31
Security Updates: No
Upgrades From: 1.26.0
Config. Changes: Yes


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.34.0/server-1.34.0.tar.gz

Humio Server 1.34 REQUIRES minimum previous version 1.26.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.26.0+ first. After running 1.34.0 or later, you cannot downgrade to versions prior to 1.26.0.

You can now use the mouse to resize columns in the event list. Previously you had to click the column header and use the Increase / Decrease Width buttons.

New features and improvements

  • UI Changes

    • Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.

    • Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.

    • Allow resize of columns in the event list by mouse.

    • Disable actions if permissions are handled externally.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Validation error messages are now more precise and have improved formatting.

    • The overall look of message boxes in Humio has been updated.

    • Updated the links for Privacy Notice and Terms and Conditions.

    • Dark mode is officially deemed stable enough to be out of beta.

  • GraphQL API

    • The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.

    • Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.

    • Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.

    • Added a 2-phase migration that allows old user API tokens to be used and cleans secrets from global after a 30-day period.

    • Changed the old personal user token implementation to be hash-based.

    • Renamed the deleteEvents-related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.

  • Configuration

    • When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if the event forwarding is not enabled on the server.

  • Functions

    • Significantly improved the performance of the query functions drop() and rename().

    • Added query function math:arctan2() to the query language.

    • Added the communityId() function for calculating hashes of network flow tuples according to the Community ID specification.

    • The kvParse() query function can now parse unquoted empty values. Use the new parameter separatorPadding to specify whether your data has whitespace around the key-value separator (typically =). The default is "Unknown", which leaves the behavior of the function unchanged.

    • Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.

    • Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.

  • Other

    • It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.

    • Added new metric: bucket-storage-upload-latency-max. It shows how long the oldest event pending upload to bucket storage has been waiting.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and later.

    • Added a precondition that ensures that the number of ingest partitions cannot be reduced.

    • Added validation and a clearer error message for queries with a time span of 0.

    • Added metric for the number of currently running streaming queries.

    • Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.

    • Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.

    • Added Australian states to the States dropdown.

    • New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).

    • Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.

    • Improved the error reporting when installing, updating or exporting a package fails.

    • Create, update, and delete of dashboards is now audit logged.

    • Reworded regular-expression-related error messages.

    • Added management API to put hosts in maintenance mode.

    • Improved error messages when an invalid regular expression is used in replace.

    • Retention based on compressed size will no longer account for segment replication.

    • Query validation has been improved to include several errors which previously only appeared after submitting a search.

    • The email field is now prepopulated when an invited user fills in a form requiring this information.

    • Node roles can now be assigned/removed at runtime.

    • Improved partition layout auto-balancing algorithm.

    • Added support in the humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.

    • A compressed segment with a size of 1 GB will now always count for retention as 1 GB. Previously, such a segment might count for more than 1 GB when calculating retention if it had more replicas than configured. The effect on the retention policy was that if you had configured retention of 1.0 GB compressed bytes, Humio might retain less than 1.0 GB of compressed data if any of those segments had too many replicas.
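
The retention accounting change can be illustrated numerically (a sketch under our own reading of the note; the function names and the old-behavior formula are assumptions):

```python
def retained_bytes_new(segments):
    # New behavior: each segment counts once at its compressed size,
    # regardless of how many replicas exist.
    return sum(size for size, _replicas in segments)

def retained_bytes_old(segments, replication_factor):
    # One plausible reading of the old behavior: a segment with excess
    # replicas counted proportionally more than its compressed size.
    return sum(size * max(replicas, replication_factor) / replication_factor
               for size, replicas in segments)
```

With one 1 GB segment holding 3 replicas against a configured factor of 2, the old accounting yields 1.5 GB while the new accounting yields 1 GB.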

    • Added checksum verification within hash filter files on read.

    • Query editor: improved code completion of function names.

    • Minor optimization when using groupBy with a single field.

    • Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.

    • Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.

Fixed in this release

  • Security

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-44228 and CVE-2021-45046).

  • Automation and Alerts

    • Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.

    • Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times in quick succession if an action was slow. Now, the alert is throttled as soon as it triggers.

    • Alerts and scheduled searches are now enabled by default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs the user that even though the entity is enabled, nothing will trigger since no actions are attached.

  • Other

    • Fixed an issue where the segment merger could mishandle errors during merge.

    • Fixed an issue on on-prem trial license that would use user count limits from cloud.

    • Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.

    • Fixed styling issue on the search page where long errors would overflow the screen.

    • Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.

    • When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (that is, ALERT_DESPITE_WARNINGS=true is not set), the warning text will now be shown as an error message on the alert in the UI.

    • Fixed an issue where certain problems would highlight the first word in a query.

    • Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.

    • Fixed incorrect results when searching through saved queries and recent queries.

    • Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • Fixed a bug where shared lookup files could not be downloaded from the UI.

    • Fixed a bug with the cache not being populated between restarts on single node clusters.

    • Fixed an issue where adding a group to a repository or view would display an error message when the user is not the organization owner or root.

    • Prevented unauthorized analytics requests from being sent.

    • Fixed an issue where error messages would show wrong input.

    • The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.

    • Removed error query param from URL when entering Humio.

    • Fixed an issue that in rare cases would cause login errors.

    • No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.

    • Fixed some widgets on dashboards reporting errors while waiting for data to load.

    • Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.

    • Changes to the state of IOC access on organizations are now reflected in the audit log.

    • Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.

    • When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric on the affected partitions as the uptime of the JVM process, to signal that data is not flowing on those partitions so that a monitored metric can raise an alarm. In the absence of a proper latency measurement, a growing non-zero metric is better than a metric stuck at zero.

    • Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.

    • When creating or updating an action, the backend now verifies that the host url associated with the action is prefixed with either http:// or https://. This affects Actions of the type: Webhook, OpsGenie, Single-Channel Slack and VictorOps.
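
The scheme check can be mirrored client-side before submitting an action (a sketch of the described rule only; the function name is ours):

```python
def has_valid_action_url(url: str) -> bool:
    # Only http:// and https:// prefixes pass the backend validation
    # described above.
    return url.startswith(("http://", "https://"))
```

Anything without one of the two prefixes (a bare host, another scheme) is rejected.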

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.

    • Changed field type for zip codes.

    • Fixed a number of stability issues with the event redaction job.

    • Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.

    • Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).

    • Changed default package type to "application" on the export package wizard.

    • Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.

    • Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.

    • Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.

    • Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.

    • Browser storage is now cleared on initialization while unauthenticated.

    • Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.

    • Removed the ability to create ingest tokens and ingest listeners on system repositories.

    • When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.

    • Fixed an issue where renaming a sandbox could leave it in a bad state.

    • When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.

    • Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).

    • Fixed a compatibility issue with Filebeat 7.16.0

    • Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to U+FFFD (the Unicode replacement character).
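
The replacement behavior is analogous to lenient decoding in Python, where invalid byte sequences (such as an encoded lone surrogate) become U+FFFD (an analogy only, not LogScale's ingest code):

```python
# 0xED 0xA0 0x80 is the CESU-8-style encoding of a lone surrogate (U+D800),
# which is invalid as UTF-8; errors="replace" substitutes U+FFFD.
raw = b"valid \xed\xa0\x80 text"
decoded = raw.decode("utf-8", errors="replace")
```

The surrounding valid text survives; only the invalid bytes are replaced.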

    • Fixed an issue where series() failed to serialize its state properly.

    • When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.

    • Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.

    • Temporary fix for live queries whose first aggregator is not bucket() or timeChart() but which use one of those as a later aggregator. As a temporary fix, such queries will fail; a proper fix will follow in a later release.

    • Changes to the state of backend feature flags are now reflected in the audit log.

    • Fixed an issue where some regexes could not be used.

    • Fixed an issue in the interactive tutorial.

    • Support Java 17.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.

    • Fixed a bug where query coordination partitions would not get updated.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Include view+parser-name in thread dumps when time is spent inside a parser.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.

    • Fixed an issue where release notes would not close when a release is open.

Humio Server 1.33.3 GA (2021-12-10)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.33.3GA2021-12-10

Cloud

2022-12-31No1.26.0No

Available for download two days after release.

Hide file hashes

Show file hashes

More security fixes related to log4j logging.

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

Humio Server 1.33.2 GA (2021-12-10)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.33.2GA2021-12-10

Cloud

2022-12-31No1.26.0No

Available for download two days after release.

Hide file hashes

Show file hashes

Security fix related to log4j logging, and fix compatibility with Filebeat.

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

  • Summary

    • Fixed a compatibility issue with Filebeat 7.16.0

Humio Server 1.33.1 GA (2021-11-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.33.1GA2021-11-23

Cloud

2022-12-31No1.26.0No

Available for download two days after release.

Hide file hashes

Show file hashes

Critical bug fixes related to version dependencies, alert throttling, etc.; improved the interactive tutorial.

Fixed in this release

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue that in rare cases would cause login errors.

    • Fixed an issue in the interactive tutorial.

  • Automation and Alerts

    • Reverted from 1.33.0: Errors on alerts, which are shown in the alert overview, are now only cleared when either the alert query is updated by a user or the alert triggers. Previously, errors that occurred when actions were triggered would be removed when the alert no longer triggered. Now, they will be displayed until the actions trigger successfully. Conversely, errors that occur when running the query may now remain until the next time the alert triggers, where they would previously be removed as soon as the query ran again without errors.

      See original release note entry in 1.33.0

    • Fixed an issue where an alert would not be throttled until after its actions had completed, which could make the alert trigger multiple times in quick succession if an action was slow. Now, the alert is throttled as soon as it triggers.

Humio Server 1.33.0 GA (2021-11-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.33.0GA2021-11-15

Cloud

2022-12-31No1.26.0Yes

Available for download two days after release.

Hide file hashes

Show file hashes

1.33 REQUIRES minimum version 1.26.0 of Humio to start. Clusters wishing to upgrade from older versions must upgrade to 1.26.0+ first. After running 1.33.0 or later, you cannot run versions prior to 1.26.0.

Once the release has been deployed, all existing personal API tokens will be hashed so they can still be used, but you will not be able to retrieve them again. If you want to preserve the tokens, be sure to copy them into a secrets vault before the release is deployed. The api-token field on the User type in GraphQL has been removed.

You can now use the mouse to resize columns in the event list. Previously you had to click the column header and use the "Increase / Decrease Width" buttons.

New features and improvements

  • UI Changes

    • Validation error messages are now more precise and have improved formatting.

    • Updated the links for Privacy Notice and Terms and Conditions.

    • Added buttons for stopping all queries, streaming queries, and historical queries from inside the query monitor.

    • The overall look of message boxes in Humio has been updated.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Allow resize of columns in the event list by mouse.

    • Disable actions if permissions are handled externally.

    • Dark mode is officially deemed stable enough to be out of beta.

    • Added autofocus to the first field when opening a dialog using the save as functionality from the Search page.

  • GraphQL API

    • Added GraphQL mutation clearRecentQueries which a user can run to clear their recent queries in a specific view or repository.
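
An invocation might look like the following (a sketch only: the mutation name comes from the note above, but the argument and return field names are assumptions, not the documented schema):

```graphql
mutation {
  # Hypothetical arguments; consult the GraphQL schema for the real input.
  clearRecentQueries(input: { viewName: "my-repo" }) {
    isSuccess
  }
}
```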

    • Renamed the deleteEvents related GraphQL mutations and queries to redactEvents. The redactEvents API is intended for redacting sensitive data from a repository, not for bulk deletion of events. We think the new name invites fewer misunderstandings.

    • The GraphQL field isEventForwardingEnabled on the HumioMetadata type is deprecated, as it is no longer in use internally. If you rely on this, please let us know.

    • Added a 2-phase migration that allows old user API tokens to remain usable and removes the secrets from global after a 30-day period.

    • Changed old personal user token implementation to hash based.

    • Added three GraphQL mutations for stopping queries: stopAllQueries, stopStreamingQueries, and stopHistoricalQueries.

  • Configuration

    • When checking if the ViewAction.EventForwarding action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if event forwarding is not enabled on the server.

  • Functions

    • Added query function math:arctan2() to the query language.

    • Added a minSpan parameter to timeChart() and bucket(), which can be used to specify a minimum span when using a short time interval.

    • The kvParse() query function can now parse unquoted empty values using the new parameter separatorPadding to specify if your data has whitespace around the key-value separator (typically =). The default is "Unknown", which will leave the functionality of the function unchanged.

    • Significantly improved the performance of the query functions drop() and rename().

    • Added the communityId() function for calculating hashes of network flow tuples according to the Community ID Spec.

    • Refactored query functions join(), selfJoin(), and selfJoinFilter() into user-visible and internal implementations.

  • Other

    • Minor optimization when using groupBy with a single field.

    • Added checksum verification within hash filter files on read.

    • Query editor: improved code completion of function names.

    • Added management API to put hosts in maintenance mode.

    • Create, update, and delete of dashboards is now audit logged.

    • Node roles can now be assigned/removed at runtime.

    • Retention based on compressed size will no longer account for segment replication.

    • Improved handling of multiple nodes attempting to create views with the same names at the same time, as might happen when bootstrapping a cluster.

    • Added validation and a clearer error message for queries with a time span of 0.

    • A compressed segment with a size of 1 GB will now always count for retention as 1 GB. Previously, such a segment might count for more than 1 GB when calculating retention if it had more replicas than configured. The effect on the retention policy was that if you had configured retention of 1.0 GB compressed bytes, Humio might retain less than 1.0 GB of compressed data if any of those segments had too many replicas.

    • Added support in the humio event collector for organization- and system-wide ingest tokens and the ability to use a parser from a different repo than the one being ingested into.

    • Added new metric: bucket-storage-upload-latency-max. It reports how long the event that has been pending upload to bucket storage the longest has been waiting.

    • Query validation has been improved to include several errors which previously only appeared after submitting a search.

    • Improved shutdown logic slightly, helping prevent thread pools from getting stuck or logging spurious errors during shutdown.

    • Improved partition layout auto-balancing algorithm.

    • The email field is now prepopulated when an invited user fills in a form requiring this information.

    • Added "export as yaml" function to the list pages of parsers, actions and scheduled searches.

    • Reworded regular-expression-related error messages.

    • It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.

    • Made the transfer coordinator display clearer errors instead of an internal server error for multinode clusters.

    • Added metric for the number of currently running streaming queries.

    • Reduce limit on number of datasources for sandbox repositories created when a user is created to .0 by default.

    • It is now possible to create actions, alerts, scheduled searches, and parsers from YAML template files.

    • Improved the error reporting when installing, updating or exporting a package fails.

    • New metric: ingest-request-delay. Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest (milliseconds).

    • Improved error messages when an invalid regular expression is used in replace.

    • Added Australian states to the States dropdown.

    • Added a precondition that ensures that the number of ingest partitions cannot be reduced.

Fixed in this release

  • UI Changes

    • When an alert query encounters a warning and Humio is not configured to trigger alerts despite warnings (via ALERT_DESPITE_WARNINGS), the warning text will now be shown as an error message on the alert in the UI.

  • Automation and Alerts

    • Alerts and scheduled searches are now enabled by default when created. The check disabling these entities if no actions are attached has been replaced with a warning, which informs the user that even though the entity is enabled, nothing will trigger since no actions are attached.

    • Alerts and scheduled searches are no longer run on cloud for organizations with an expired trial license, and on-prem for any expired license.

  • Functions

    • Fixed an issue where sort() would cause events to be read in a non-optimal order for the entire query.

    • Fixed an issue where series() failed to serialize its state properly.

    • Fixed a bug in the validation of the bits parameter of hashMatch() and hashRewrite().

  • Other

    • Use a fresh (random) name for the tmp folder below the datadir to ensure that it is a proper subdir of the datadir and not a mount point.

    • The field vhost in internal Humio logging is now reserved for denoting the host logging the message. Other uses of vhost now use the field hostId.

    • Support Java 17.

    • When performing jobs triggered via the Redact Events API, Humio could restart queries for unrelated views until the delete job completed. This has been improved, so only views affected by the delete will be impacted.

    • Fixed an issue where clicking on the counters of parsed events on the Parsers page would open an empty search page, except for built-in parsers. Now, it correctly shows the latest parsed events for all parsers (except package parsers).

    • Fixed an issue where error messages would show wrong input.

    • Fixed an issue where renaming a sandbox could leave it in a bad state.

    • Fixed an issue where a digest node could be unable to rejoin the cluster after being shut down if all other digest nodes were also down at the time.

    • Changed default package type to "application" on the export package wizard.

    • Fixed styling issue on the search page where long errors would overflow the screen.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Prevented unauthorized analytics requests from being sent.

    • Fixed a number of stability issues with the event redaction job.

    • Fixed an issue where the web client could start queries from the beginning of time when scrolling backwards through events in the UI.

    • Fixed an issue where the segment merger would write that the current node had a segment slightly before registering that segment in the local node.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there were too many datasources in the repo. This led to a crash loop when the affected node was restarted.

    • Fixed an issue where a failing event forwarder would be cached indefinitely and could negatively impact Humio performance.

    • When checking if the ViewAction.ChangeS3ArchivingSettings action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a view, as the action only makes sense on repositories.

    • Errors on alerts, which are shown in the alert overview, are now only cleared when either the alert query is updated by a user or the alert triggers. Previously, errors that occurred when actions were triggered would be removed when the alert no longer triggered. Now, they will be displayed until the actions trigger successfully. On the other hand, errors that occur when running the query may now remain until the next time the alert triggers, where they would previously be removed as soon as the query ran again without errors. This change was reverted in 1.33.1.

    • Fixed an issue where some regexes could not be used.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Crash the node if any of a number of critical threads die. This should help prevent zombie nodes.

    • Fixed incorrect results when searching through saved queries and recent queries.

    • Fixed a bug with the cache not being populated between restarts on single node clusters.

    • When creating or updating an action, the backend now verifies that the host url associated with the action is prefixed with either http:// or https://. This affects Actions of the type: Webhook, OpsGenie, Single-Channel Slack and VictorOps.

    • Fixed a bug where only part of the Users page was loading when navigating from the All organizations page.

    • Fixed an issue where a dashboard installed with a YAML file could be slightly different than what was specified in the file.

    • Fixed an issue where a scheduled search failed and was retried, if it had multiple actions and at least one action was unknown to Humio. Now, the unknown action is logged, but the scheduled search completes successfully and continues to the next scheduled run.

    • Fixed an edge case where Humio might create multiple copies of the same datasource when the number of Kafka partitions is changed. The fix ensures only one copy will be created.

    • Fixed an issue where comments spanning multiple lines wouldn't be colored correctly.

    • Fixed an issue on on-prem trial license that would use user count limits from cloud.

    • Changed field type for zip codes.

    • Fixed an issue where adding a group to a repository or view would display an error message when the user is not the organization owner or root.

    • Changes to the state of IOC access on organizations are now reflected in the audit log.

    • Removed the ability to create ingest tokens and ingest listeners on system repositories.

    • Fixed a bug where invalid UTF-16 characters could not be ingested. They are now converted to U+FFFD (the Unicode replacement character).

    • Addressed an issue causing Humio to sometimes error log an ArrayIndexOutOfBoundsException during shutdown.

    • Fixed an issue where missing undersized segments in a datasource might cause Humio to repeatedly transfer undersized segments between nodes.

    • Temporary fix for live queries whose first aggregator is not bucket() or timeChart() but which use one of those as a later aggregator. As a temporary fix, such queries will fail; a proper fix will follow in a later release.

    • Browser storage is now cleared on initialization while unauthenticated.

    • Fixed a bug where query coordination partitions would not get updated.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • Fixed some widgets on dashboards reporting errors while waiting for data to load.

    • When checking if the ViewAction.ChangeRepoConnections action is allowed (with e.g. SearchDomain.isActionAllowed), the answer will now be false if checked on a repository, as the action only makes sense on views.

    • No longer return the "Query Plan" in responses, but return a hash in the new field hashedQueryOnView instead. The plan could leak information not otherwise visible to the user, such as query prefixes being applied.

    • When a digester fails to start, rather than restarting the JVM as previous releases did, keep retrying to start, assuming the issue is transient, such as data for a single ingest partition being unavailable for a short while. While in this situation, the process reports the ingest latency metric on the affected partitions as the uptime of the JVM process, to signal that data is not flowing on those partitions so that a monitored metric can raise an alarm. In the absence of a proper latency measurement, a growing non-zero metric is better than a metric stuck at zero.

    • Changes to the state of backend feature flags are now reflected in the audit log.

    • Fixed an issue where release notes would not close when a release is open.

    • Fixed an issue where the segment merger could mishandle errors during merge.

    • Fixed an issue causing Humio running on Java 16+ to return incorrect search results when the input query contains Unicode surrogate pairs (e.g. when searching for an emoji).

    • Fixed a bug where shared lookup files could not be downloaded from the UI.

    • Removed error query param from URL when entering Humio.

    • Fixed an issue where OIDC without a discovery endpoint would fail to configure if OIDC_TOKEN_ENDPOINT_AUTH_METHOD was not set.

    • Fixed an issue where certain problems would highlight the first word in a query.

    • Include view+parser-name in thread dumps when time is spent inside a parser.

Humio Server 1.32.8 LTS (2022-03-09)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.32.8LTS2022-03-09

Cloud

2022-10-31No1.16.0No

Hide file hashes

Show file hashes

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.8/server-1.32.8.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5, 1.32.6, 1.32.7

Updated dependencies with security and weakness fixes, and improved performance.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden and can be opened again on organization settings pages and repository settings pages.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
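
      The conversion described above can be sketched with Python's standard library; this is an illustration of offset-to-UTC normalization, not LogScale/Humio code:

      ```python
      from datetime import datetime, timezone

      # An offset timestamp of the shape now accepted by the GraphQL DateTime type.
      ts = "2021-07-18T14:13:09.517+02:00"

      # Parse the offset-aware timestamp, then normalize it to UTC, mirroring
      # the internal conversion described in the release note.
      parsed = datetime.fromisoformat(ts)
      as_utc = parsed.astimezone(timezone.utc)

      print(as_utc.isoformat())  # 2021-07-18T12:13:09.517000+00:00
      ```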

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.
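
      A calling sketch for the new mutation: the mutation name cancelDeleteEvents comes from the note above, but the input argument names (repositoryName, token), their types, and the example values are hypothetical, not confirmed API details:

      ```python
      import json

      # Hypothetical request body for the cancelDeleteEvents mutation.
      # Argument names and types are assumptions for illustration only.
      mutation = """
      mutation Cancel($repositoryName: String!, $token: Long!) {
        cancelDeleteEvents(input: {repositoryName: $repositoryName, token: $token})
      }
      """

      payload = json.dumps({
          "query": mutation,
          "variables": {"repositoryName": "my-repo", "token": 42},
      })

      # POST `payload` to the GraphQL endpoint with your API token to submit it.
      print(payload)
      ```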

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management may now delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • The scheduled search "schedule" is now explained using human-readable text such as "At 9.30 on Tuesdays".

    • Improved search on the Users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" and a Retry-After header suggesting to retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size (default: 5).
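
      A client honoring this backpressure can be sketched as follows; this is a generic HTTP retry sketch (the endpoint URL and body are placeholders), not official client code:

      ```python
      import time
      import urllib.error
      import urllib.request


      def retry_delay(headers, default=5):
          """Read the server's Retry-After hint, falling back to the suggested 5 s."""
          try:
              return int(headers.get("Retry-After", default))
          except (TypeError, ValueError):
              return default


      def post_with_backoff(url, body, max_retries=3):
          """POST an ingest payload, sleeping on 429 responses before retrying."""
          for attempt in range(max_retries + 1):
              req = urllib.request.Request(
                  url, data=body, headers={"Content-Type": "application/json"}
              )
              try:
                  with urllib.request.urlopen(req) as resp:
                      return resp.status
              except urllib.error.HTTPError as err:
                  if err.code != 429 or attempt == max_retries:
                      raise
                  time.sleep(retry_delay(err.headers))
      ```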

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay", a histogram of the time ingest requests spend delayed due to exceeding the limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.

Fixed in this release

  • Security

    • Updated dependencies to Akka to fix CVE-2021-42697.

    • Updated dependencies to address a critical security vulnerability in the log4j logging framework, "log4shell" (CVE-2021-44228).

    • Updated dependencies to Netty to fix CVE-2021-43797.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.

    • Updated dependencies to jawn to fix CVE-2022-21653.

  • Summary

    • Fixed an issue where queries of the form #someTagField != someValue ... would sometimes produce incorrect results.

    • Performance improvements of Ingest and internal caching.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Updated dependencies to Jackson to fix a weakness.

    • Fixed an issue with epoch and offsets not always being stripped from segments.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug that could have caused alerts not to re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting 404 as status on poll, where it should get 200.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, it is trusted.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.
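
      The two accepted bulk action types can be illustrated with a minimal NDJSON payload; the document contents are made up for illustration:

      ```python
      import json

      # Build an Elasticsearch-style _bulk body: each action line is followed by
      # its source document, and the whole payload ends with a newline.
      events = [{"message": "login ok"}, {"message": "login failed"}]

      lines = []
      for action, event in zip(("index", "create"), events):
          lines.append(json.dumps({action: {}}))  # action line, e.g. {"index": {}}
          lines.append(json.dumps(event))         # document line
      payload = "\n".join(lines) + "\n"

      # Both action types result in the same ingest behavior per the note above.
      print(payload)
      ```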

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.

Humio Server 1.32.7 LTS (2022-01-06)

Version: 1.32.7
Type: LTS
Release Date: 2022-01-06
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.7/server-1.32.7.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5, 1.32.6

Updated dependencies with security and weakness fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu on organization settings pages and repository settings pages can now be hidden and reopened.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management may now delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • The scheduled search "schedule" is now explained using human-readable text such as "At 9.30 on Tuesdays".

    • Improved search on the Users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" and a Retry-After header suggesting to retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size (default: 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay", a histogram of the time ingest requests spend delayed due to exceeding the limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability in the log4j logging framework, "log4shell" (CVE-2021-44228).

    • Updated dependencies to Netty to fix CVE-2021-43797.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105.

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Updated dependencies to Jackson to fix a weakness.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug that could have caused alerts not to re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting 404 as status on poll, where it should get 200.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, it is trusted.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.

Humio Server 1.32.6 LTS (2021-12-15)

Version: 1.32.6
Type: LTS
Release Date: 2021-12-15
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.6/server-1.32.6.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4, 1.32.5

Security fix related to log4j logging, and a fix for compatibility with Filebeat.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.
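
For illustration, the equivalent normalization in Python's standard library (assuming the timestamp carries a standard +02:00 ISO-8601 offset):

```python
from datetime import datetime, timezone

# Parse a non-UTC ISO-8601 timestamp and normalize it to UTC,
# mirroring the conversion the GraphQL DateTime type now performs.
ts = datetime.fromisoformat("2021-07-18T14:13:09.517+02:00")
utc = ts.astimezone(timezone.utc)
print(utc.isoformat())  # 2021-07-18T12:13:09.517000+00:00
```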

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.
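
A rough sketch of the eviction rule described above (function name and data layout are illustrative only, not the actual implementation):

```python
# Hedged sketch: when respecting LOCAL_STORAGE_MIN_AGE_DAYS would
# overflow the local disk, the oldest local segments that also exist
# in bucket storage may be deleted even inside that age window,
# because they can be re-fetched from the bucket later.
def pick_evictions(segments, needed_bytes):
    """segments: iterable of (name, age_days, size_bytes, in_bucket)."""
    freed, victims = 0, []
    for name, age, size, in_bucket in sorted(segments, key=lambda s: -s[1]):
        if freed >= needed_bytes:
            break
        if in_bucket:  # safe to delete locally: a bucket copy exists
            victims.append(name)
            freed += size
    return victims
```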

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".

    • Improved search for users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" and a Retry-After header suggesting to retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size, default is 5.
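
A client can honor this back-pressure by retrying on 429 and respecting the Retry-After header; a minimal sketch (the `post` callable and payload are placeholders, not a real client API):

```python
import time

def send_with_retry(post, payload, max_attempts=5):
    """Send an ingest request, retrying on 429 responses.

    `post` is any callable returning an object with `.status_code`
    and `.headers`; on 429, wait for the number of seconds the server
    suggests in Retry-After (defaulting to the 5 seconds noted above).
    """
    for _ in range(max_attempts):
        resp = post(payload)
        if resp.status_code != 429:
            return resp
        delay = int(resp.headers.get("Retry-After", 5))
        time.sleep(delay)
    return resp  # still 429 after max_attempts; caller decides what to do
```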

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. Also, it is now possible to test the scheduled search specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

    • Fixed a compatibility issue with Filebeat 7.16.0

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046).

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed an issue where streaming (exporting) query results in JSON format could include extra "," characters within the output.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= can yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Permission checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the proper error status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.
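
The fetch strategy described above can be sketched as follows (all names are hypothetical; the filter check is abstracted behind a callable):

```python
# Hedged sketch: for segments not present on any local disk, fetch
# only the small hash filter file first, evaluate it against the
# query, and fetch the full segment file only if the filter says the
# searched terms may be present.
def segments_to_fetch(segments, fetch_filter, may_match):
    """segments: segment ids; fetch_filter(seg) returns its hash filter;
    may_match(filter) decides whether the full segment must be fetched."""
    needed = []
    for seg in segments:
        hash_filter = fetch_filter(seg)  # cheap: small auxiliary file
        if may_match(hash_filter):
            needed.append(seg)           # expensive full-segment fetch required
    return needed
```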

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.
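
The check can be sketched as follows (hypothetical names, not the actual implementation):

```python
# Hedged sketch of the authorization rule above: when the URL names a
# repository or view, it must match the view the ingest token is bound
# to; otherwise the request is rejected with 403 Forbidden.
def authorize_ingest(url_view, token_view):
    if url_view is not None and url_view != token_view:
        return 403  # Forbidden: token is bound to a different view
    return 200      # OK: URL matches the token's view (or names none)
```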

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.

Humio Server 1.32.5 LTS (2021-12-10)

Version: 1.32.5
Type: LTS
Release Date: 2021-12-10
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.5/server-1.32.5.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3, 1.32.4

Security fix related to log4j logging.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu can be hidden and reopened on organization settings pages and repository settings pages.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".

    • Improved search for users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" and a Retry-After header suggesting to retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size, default is 5.

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. Also, it is now possible to test the scheduled search specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

    • Fixed a compatibility issue with Filebeat 7.16.0

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= can yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Permission checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the proper error status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.

Humio Server 1.32.4 LTS (2021-12-10)

Version: 1.32.4
Type: LTS
Release Date: 2021-12-10
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.4/server-1.32.4.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2, 1.32.3

Security fix related to log4j logging, and fix compatibility with Filebeat.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden, and can be opened again, on organization settings pages and repository settings pages.

    • Style updates for the cluster management pages.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Style updates for the identity provider pages.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.
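The DateTime change above can be illustrated with the equivalent conversion in Python; this is a sketch of the described behaviour, not the server's implementation (the timestamp is the note's example written as a valid ISO 8601 offset):

```python
from datetime import datetime, timezone

# An offset timestamp of the kind now accepted by the GraphQL DateTime type.
ts = datetime.fromisoformat("2021-07-18T14:13:09.517+02:00")

# Such values are converted to UTC internally; the same normalization here:
utc = ts.astimezone(timezone.utc)
print(utc.isoformat())  # 2021-07-18T12:13:09.517000+00:00
```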

  • Configuration

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management may now delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.
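As a sketch, the two configuration options above might appear together in a node's environment file; the values shown are illustrative:

```shell
# Treat the bucket storage endpoint as IBM Cloud Object Storage (per the note above).
S3_STORAGE_IBM_COMPAT=true

# Ephemeral-disk nodes may delete local segment files that bucket storage can re-supply.
USING_EPHEMERAL_DISKS=true
```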

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When respecting that config would overflow the local disk, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human-readable text such as "At 9.30 on Tuesdays".

    • Improved search on the Users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too Many Requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT percent of the total heap size (default: 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding the limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK 16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates with it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.
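On the client side, the 429 rejection behaviour described in the ingest-limiting entry above suggests honouring the Retry-After header. A minimal sketch, with a hypothetical helper name:

```python
def retry_delay(status, headers, default=5):
    """Seconds to wait before retrying an ingest request, or None if no retry.

    Mirrors the documented behaviour: rejections arrive as status 429 with a
    Retry-After header suggesting a retry in 5 seconds.
    """
    if status != 429:
        return None  # request was not rejected for load shedding
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default  # malformed header: fall back to the suggested 5 seconds
```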

Fixed in this release

  • Security

    • Updated dependencies to address a critical security vulnerability in the log4j logging framework, "log4shell" (CVE-2021-44228).

    • Fixed a compatibility issue with Filebeat 7.16.0.

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, the user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; that only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
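The hash-filter optimisation noted above (fetch the small hash filter first, and the full segment only on a possible match) can be sketched as a two-phase lookup. The fetch callbacks and the set-based filter below are stand-ins for LogScale's internal structures:

```python
def search_remote_segment(segment_id, term, fetch_hash_filter, fetch_segment):
    """Two-phase fetch: consult the cheap hash filter before the full segment."""
    hash_filter = fetch_hash_filter(segment_id)   # small download
    if term not in hash_filter:                   # definite miss: skip the segment
        return []
    events = fetch_segment(segment_id)            # expensive download, only on a hit
    return [e for e in events if term in e]

# Usage with toy callbacks: only segments whose filter matches are downloaded.
filters = {"seg1": {"error"}, "seg2": {"info"}}
events = {"seg1": ["error in worker"], "seg2": ["info: ok"]}
hits = search_remote_segment("seg2", "error", filters.get, events.get)
```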

Humio Server 1.32.3 LTS (2021-12-01)

Version: 1.32.3 | Type: LTS | Release Date: 2021-12-01 | Availability: Cloud | End of Support: 2022-10-31 | Security Updates: No | Upgrades From: 1.16.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.3/server-1.32.3.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1, 1.32.2

Bug fix to resolve a problem with clusters using bucket storage.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden, and can be opened again, on organization settings pages and repository settings pages.

    • Style updates for the cluster management pages.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Style updates for the identity provider pages.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management may now delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When respecting that config would overflow the local disk, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human-readable text such as "At 9.30 on Tuesdays".

    • Improved search on the Users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too Many Requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT percent of the total heap size (default: 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding the limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK 16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled-search-specific message templates with it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.
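The human-readable schedule text mentioned above ("At 9.30 on Tuesdays") can be sketched with a tiny translator for the simple weekly case; the cron expression and day names are illustrative, not LogScale's implementation:

```python
DAYS = ["Sundays", "Mondays", "Tuesdays", "Wednesdays",
        "Thursdays", "Fridays", "Saturdays"]

def describe(cron):
    """Render a crontab expression like '30 9 * * 2' as human-readable text."""
    minute, hour, dom, month, dow = cron.split()
    if dom == "*" and month == "*" and dow.isdigit():
        return f"At {int(hour)}.{int(minute):02d} on {DAYS[int(dow) % 7]}"
    return cron  # shapes not handled by this sketch fall through unchanged

print(describe("30 9 * * 2"))  # At 9.30 on Tuesdays
```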

Fixed in this release

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue that would result in a query not completing when one of the involved segments was deleted locally while the query was running. This could happen on clusters using bucket storage with more data than fits the local disks.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • It is now possible to ingest logs into Humio using LogStash v.7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, the user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to their input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; that only happens if no access-order snapshot is stored locally. If a snapshot is present, it is trusted.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the asset's edit page, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
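The ingest-token check described above (the view on the token must match the repository or view named in the URL, else 403 Forbidden) amounts to a simple guard; the function and parameter names here are hypothetical:

```python
def authorize_ingest(url_repo_or_view, token_view):
    """Return an HTTP status for an ingest request, per the rule in the notes."""
    if url_repo_or_view is not None and url_repo_or_view != token_view:
        return 403  # Forbidden: token is not valid for the addressed repo/view
    return 200      # token matches, or the URL names no repository/view
```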

Humio Server 1.32.2 LTS (2021-11-19)

Version: 1.32.2 | Type: LTS | Release Date: 2021-11-19 | Availability: Cloud | End of Support: 2022-10-31 | Security Updates: No | Upgrades From: 1.16.0 | Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.2/server-1.32.2.tar.gz

These notes include entries from the following previous releases: 1.32.0, 1.32.1

Critical bug fixes regarding a version dependency and race conditions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden, and can be opened again, on organization settings pages and repository settings pages.

    • Style updates for the cluster management pages.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Style updates for the identity provider pages.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".

    • Improved search for users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too many requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT percent of the total heap size (default: 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. Also, it is now possible to also test the scheduled search specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.
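
The ingest request limiting described above (status 429 plus a Retry-After header) can be honored client-side with a small backoff loop. A minimal sketch; the URL and payload handling are assumptions for illustration, not part of the release:

```python
import time
import urllib.request
import urllib.error

def retry_delay(headers, default=5):
    """Seconds to wait before retrying, taken from Retry-After when present."""
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

def send_with_retry(url, payload, max_attempts=5):
    """POST an ingest payload, sleeping on 429 rejections as the server suggests."""
    for _ in range(max_attempts):
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise
            time.sleep(retry_delay(e.headers))
    raise RuntimeError("ingest request kept being rejected with 429")
```

The 5-second default mirrors the Retry-After value the release note says the server suggests.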

Fixed in this release

  • Summary

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

    • Updated a dependency to a version fixing a critical bug.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially cause alerts not to re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; it only does so if no access-order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the edit page for the new asset, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
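
One of the entries above notes that the Elastic-compatible ingest endpoint now accepts 'create' as well as 'index' bulk operations. A sketch of building such an NDJSON bulk body; the payload shape is the standard Elasticsearch bulk format, and posting it to a concrete endpoint is left out:

```python
import json

def bulk_body(events, op="create"):
    """Build an Elasticsearch-style bulk body. Per the release note above,
    'create' and 'index' action lines are treated identically on ingest."""
    lines = []
    for event in events:
        lines.append(json.dumps({op: {}}))  # action line
        lines.append(json.dumps(event))     # source line
    return "\n".join(lines) + "\n"

body = bulk_body([{"@timestamp": "2021-10-26T12:00:00Z", "message": "hello"}])
```

This is what let Fluent-Bit v1.8.3, which switched from 'index' to 'create', keep working against the endpoint.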

Humio Server 1.32.1 LTS (2021-11-16)

Version: 1.32.1
Type: LTS
Release Date: 2021-11-16
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.1/server-1.32.1.tar.gz

These notes include entries from the following previous releases: 1.32.0

Bug fixes related to Amazon S3 log entries, saving a User Interface theme, Logstash, and general security.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden and can be opened again on organization settings pages and repository settings pages.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added a compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human-readable text such as "At 9:30 on Tuesdays".

    • Improved search for users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled with status 429 "Too many requests" and a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT percent of the total heap size (default: 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. Also, it is now possible to also test the scheduled search specific message templates using it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.
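
The GraphQL DateTime change listed above accepts offsets such as +02:00 and normalizes them to UTC internally. The equivalent conversion can be sketched with Python's standard library:

```python
from datetime import datetime, timezone

def to_utc(stamp: str) -> str:
    """Parse an ISO-8601 timestamp carrying a UTC offset and re-render it in
    UTC, mirroring the internal normalization the release note describes."""
    dt = datetime.fromisoformat(stamp)  # e.g. "2021-07-18T14:13:09.517+02:00"
    return dt.astimezone(timezone.utc).isoformat()
```

For example, an input of 14:13:09.517 at +02:00 comes back as 12:13:09.517 UTC.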

Fixed in this release

  • Summary

    • Security fix.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • It is now possible to ingest logs into Humio using Logstash v7.13 and upwards.

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could potentially cause alerts not to re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= could yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll instead of the correct status.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL with either a repository or view name in it and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL, and a 403 Forbidden status is returned, if not.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based on the segment last-modified timestamp; it only does so if no access-order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the edit page for the new asset, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
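
The entry above about ingest tokens means the view bound to the token must match the repository or view named in the request URL, with a 403 on mismatch. A sketch of that check; the URL layout and function names are illustrative, not Humio's actual internals:

```python
from urllib.parse import urlparse

def ingest_status(url: str, token_view: str) -> int:
    """Return the HTTP status an ingest request should get: 200 when the
    repo/view in the URL matches the view on the ingest token, 403 otherwise.
    The path layout .../repositories/<name>/... is an assumption."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    if "repositories" in parts:
        idx = parts.index("repositories")
        if idx + 1 < len(parts):
            return 200 if parts[idx + 1] == token_view else 403
    return 403
```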

Humio Server 1.32.0 LTS (2021-10-26)

Version: 1.32.0
Type: LTS
Release Date: 2021-10-26
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.32.0/server-1.32.0.tar.gz

We now distribute Humio as a tarball in addition to the fat jar format we've previously used. We will continue to distribute the fat jar for the time being. The tarball includes a launcher script, which will set a number of JVM arguments for users automatically. We believe this will help users configure Humio for good performance out of the box. For more information, see LogScale Launcher Script.

Search performance via hashfilter-first on segments in buckets

Some searches, including regex and literal string matches, now allow searching without fetching the actual segment files from the bucket, in case the segment is only present in the bucket and not on any local disk. Humio now fetches the hash filter file and uses that to decide if the segment file may have a match before downloading the segment file in this case.
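
The hashfilter-first flow can be sketched as follows, using a set of hash buckets as a toy stand-in for the real hash filter file format; the fetch functions are placeholders for bucket-storage downloads, not actual Humio APIs:

```python
def search_segment(segment_id, needle, fetch_hashfilter, fetch_segment):
    """Fetch the small hash filter first, and only download the full segment
    file when the filter says the needle may be present in it."""
    hash_filter = fetch_hashfilter(segment_id)  # cheap: small auxiliary file
    if hash(needle) % 1024 not in hash_filter:  # definite miss: skip download
        return []
    events = fetch_segment(segment_id)          # expensive: full segment file
    return [e for e in events if needle in e]
```

A miss in the filter is definitive, so the expensive segment download is skipped entirely; a hit only means the segment may contain a match, so it still has to be searched.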

Humio packages can now carry scheduled searches, all types of actions, and files with lookup data (either CSV or JSON formatted). Additionally, we have improved the UI for managing packages, to make it easier to find the package you are looking for. This also marks the point where packages are brought out of beta.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • On mobile devices, the left navigation menu is hidden and can be opened again on organization settings pages and repository settings pages.

    • Cluster management pages style updates.

    • Fixed some styling issues on the Query Quotas page.

    • The signup path was removed, together with the corresponding pages.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Identity provider pages style update.

  • GraphQL API

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

  • Configuration

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

  • Functions

  • Other

    • Added focus states to text field, selection and text area components.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Raised the size limit on ingest requests from 8 MB to 1 GB.

    • Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".

    • Improved search on the Users page.

    • Package installation error messages are now much more readable.

    • Limit pending ingest requests by rejecting excess invocations. Rejections are signalled as status 429 "Too many requests" with a Retry-After header suggesting a retry in 5 seconds. Limiting starts when queued requests exceed INGEST_REQUEST_LIMIT_PCT of the total heap size (default 5).

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to false (the default).

    • Added a Data subprocessors page under account.

    • Improved audit log for organization creation.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Humio Docker images are now based on Alpine Linux.

    • New metric: "ingest-request-delay". Histogram of ingest request time spent being delayed due to exceeding limit on concurrent processing of ingest.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Allow launching using JDK-16.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates with it.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • Added loading and error states to the page where the user selects whether to create a new repository or view.

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Added Dark Mode for Query Monitor page.
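
The ingest request limiting described in this list (status 429 "Too many requests" with a Retry-After header) can be handled client-side with a simple retry loop. A minimal sketch, assuming a hypothetical `send` callable that returns the response status code and headers:

```python
import time

def send_with_retry(send, payload, max_attempts=5):
    """Retry an ingest request when the server sheds load with 429.

    `send` is any callable returning (status_code, headers); the sketch
    honours the Retry-After header suggested by the server (5 seconds
    by default in the limiter described above).
    """
    for attempt in range(max_attempts):
        status, headers = send(payload)
        if status != 429:
            return status
        # Server is over its pending-ingest budget; back off as suggested.
        delay = int(headers.get("Retry-After", 5))
        time.sleep(delay)
    return status
```

The `send` function and its signature are illustrative, not part of any LogScale client library.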

Fixed in this release

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Automation and Alerts

    • Fixed a bug which could have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

  • Functions

    • Fixed an issue where top() with max= can yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

  • Other

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Truncate long user names on the Users page.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would retain segments acquired from read-only buckets if those segments were deleted. Humio will now properly delete the segments locally, and drop the reference to the copy in the read-only bucket.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where, when creating a repository, a user was automatically assigned a role but could not see themselves in the roles list. Also, when editing roles, the assignment was not counted correctly under usage.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll, where it should get a 200.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Split the package export page into a dialog with multiple steps.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • When accessing Humio through a URL containing a repository or view name and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL; if not, a 403 Forbidden status is returned.

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based off of the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • Updated dependencies with security fixes.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.
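
The Elastic ingest compatibility entry in this list (accepting 'create' in addition to 'index' operations) concerns the NDJSON shape of Elasticsearch-style bulk requests, where each event is preceded by an action line. A sketch of building such a payload; the field names are illustrative:

```python
import json

def bulk_payload(events, op="index"):
    """Build an Elasticsearch-style _bulk body (NDJSON).

    Per the release note above, the Elastic-compatible endpoint treats
    "create" (used by Fluent Bit >= 1.8.3) the same as "index".
    """
    lines = []
    for event in events:
        lines.append(json.dumps({op: {}}))   # action line
        lines.append(json.dumps(event))      # source line
    return "\n".join(lines) + "\n"
```

Whether `op` is `"index"` or `"create"`, the resulting ingest behavior is the same.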

Humio Server 1.31.0 GA (2021-09-27)

Version: 1.31.0
Type: GA
Release Date: 2021-09-27
Availability: Cloud
End of Support: 2022-10-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.

We now distribute Humio as a tarball in addition to the fat jar format we've previously used. We will continue to distribute the fat jar for the time being. The tarball includes a launcher script, which will set a number of JVM arguments for users automatically. We believe this will help users configure Humio for good performance out of the box. For more information, see LogScale Launcher Script.

Search performance via hashfilter-first on segments in buckets

Some searches, including regex and literal string matches, now allow searching without fetching the actual segment files from the bucket, in case the segment is only present in the bucket and not on any local disk. Humio now fetches the hash filter file and uses that to decide if the segment file may have a match before downloading the segment file in this case.
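
The decision logic above can be sketched as follows; the fetch functions and the `may_contain` check are hypothetical stand-ins for the internal hash filter machinery:

```python
def search_segment(segment, needle, fetch_hash_filter, fetch_segment, scan):
    """Hash-filter-first search sketch for segments only in bucket storage.

    Downloads the small hash filter file first; the full segment file is
    fetched only if the filter says the needle may be present.
    """
    if not segment.local:
        hash_filter = fetch_hash_filter(segment)    # small download
        if not hash_filter.may_contain(needle):
            return []                               # definite miss, skip segment
    data = segment.local or fetch_segment(segment)  # large download
    return scan(data, needle)
```

The win comes from the filter's one-sided error: a "no" answer is definitive, so segments that cannot match are never downloaded from the bucket.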

Humio packages can now carry scheduled searches, all types of actions, and files with lookup data (either CSV or JSON formatted). Additionally, we have improved the UI for managing packages, to make it easier to find the package you are looking for. This also marks the point where packages are brought out of beta.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecates the copyFile GraphQL mutation, as it is no longer used. If you use this mutation, please let us know.

New features and improvements

  • UI Changes

    • Updated the style of the email action template and made the wording used dependent on whether an alert or scheduled search was triggered.

    • The signup path was removed, together with the corresponding pages.

    • Identity provider pages style update.

    • On mobile devices, the left navigation menu can now be hidden and reopened on organization settings pages and repository settings pages.

    • Breadcrumbs are aligned across all pages and show the package name with a link when viewing or editing an asset from a package.

    • Cluster management pages style updates.

    • Removed the pop-up link to edit an alert or scheduled search when on the form page. This link is only relevant when creating an entity from the search page via a dialog.

    • Updated design for Package Marketplace and Installed Packages to make them easier to use and more consistent.

    • Fixed some styling issues on the Query Quotas page.

  • Automation and Alerts

    • When selecting actions for alerts or scheduled searches, the actions are now grouped by the package they were imported from.

  • GraphQL API

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Deprecates the installPackageFromRegistry and updatePackageFromRegistry GraphQL mutations in favor of installPackageFromRegistryV2 and updatePackageFromRegistryV2.

    • When using the GraphQL field allowedViewActions, the two previously deprecated actions ChangeAlertsAndNotifiers and ReadEvents are no longer returned. Look for their replacements ChangeTriggersAndActions and ReadContents instead.

    • Added information about the use of preview fields in the result from calling the GraphQL API. The information will be in the field extensions.preview and will be a list of objects with a name and reason field.

    • Extended 'Relative' field type for schema files to include support for the value 'now'.

    • Deprecates the two GraphQL fields id and contentHash on the File type. The two fields are considered unused, so no alternatives are provided. If you rely on them, please let us know.

    • Deprecates the package field on the SearchDomain GraphQL type, in favor of packageV2. The new field has a simpler and more correct return type.

    • The name, displayName, and location GraphQL fields on the File type are deprecated in favor of the new nameAndPath field.

    • The fileName, displayName, and location GraphQL fields on the UploadedFileSnapshot type are deprecated in favor of the new nameAndPath field.

    • The GraphQL DateTime type now supports non-UTC time. Timestamps like 2021-07-18T14:13:09.517+02:00 are now legal, and will be converted to UTC time internally.

  • Configuration

    • The Scheduled Searches feature is no longer in beta and can be used by all users without enabling it first.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

    • Added compatibility mode for using IBM Cloud Object Storage as bucket storage via S3_STORAGE_IBM_COMPAT.

  • Functions

  • Other

    • Fixed an issue where using the browser back button while "advanced editing" the query text of a scheduled search or an alert would hide the blue bar that allows saving the query.

    • Added support for including dashboard and alert labels when exporting a package.

    • Warnings when running scheduled searches now show up as errors in the scheduled search overview page if SCHEDULED_SEARCH_DESPITE_WARNINGS is set to 'false' (the default).

    • Scheduled search "schedule" is explained using human readable text such as "At 9.30 on Tuesdays".

    • Allow launching using JDK-16.

    • Improved error handling when running scheduled searches, so that a failed scheduled search will be retried as long as it is within the Backfill Limit.

    • You can now export and import packages containing any of the action types: Webhook, Email, Humio Repo, Pager Duty, Slack, Slack multi channel, Ops Genie and Victor Ops.

    • Package installation error messages are now much more readable.

    • Added focus states to text field, selection and text area components.

    • The test action functionality no longer uses alert terminology, as actions can be invoked from both alerts and scheduled searches. It is now also possible to test the scheduled search specific message templates with it.

    • Added Dark Mode for Query Monitor page.

    • Improved handling of local disk space relative to LOCAL_STORAGE_MIN_AGE_DAYS. When the local disk would overflow by respecting that config, Humio can now delete the oldest local segments that are present in bucket storage, even when they are within that time range.

    • Improved search on the Users page.

    • Added loading and error states to the page where the user selects whether to create a new repository or view.

    • Added explicit distribution information to the Elastic bulk API for Elasticsearch API compatibility.

    • Added support for importing packages with CSV and JSON files. Exporting packages with files is not fully supported yet, but will be in a future release.

    • Humio Docker images are now based on Alpine Linux.

    • Added maximum width to tabs on the Group page, so they do not keep expanding forever.

    • Improved audit log for organization creation.

    • Scheduled search "schedule" field is now validated, showing accurate help for each part of the crontab expression.

    • Added a Data subprocessors page under account.
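
The non-UTC DateTime support listed above can be mirrored client-side. A sketch with Python's standard library showing the normalisation the server now performs internally:

```python
from datetime import datetime, timezone

# An offset timestamp of the kind now accepted by the GraphQL DateTime type.
ts = datetime.fromisoformat("2021-07-18T14:13:09.517+02:00")

# Normalised to UTC, as the server stores it internally.
ts_utc = ts.astimezone(timezone.utc)
print(ts_utc.isoformat())  # 2021-07-18T12:13:09.517000+00:00
```

The +02:00 offset is subtracted, so 14:13 local time becomes 12:13 UTC.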

Fixed in this release

  • Documentation

    • Updated the examples on how to use the match() query function in the online documentation.

  • Functions

    • Fixed an issue with the split() function which caused incorrect (usually, too few) query results in some cases where the output fields were referred to later in the query.

    • Fixed an issue where top() with max= can yield the same key multiple times (for example ...| top([queryId, query], max=totalSize)).

  • Other

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Fixed an issue where Shift+Enter would select the current completion rather than adding a newline.

    • Removed an old Cloud Signups page. The page is not necessary since organizations were implemented for the Cloud environments.

    • Updated the new asset dialog button text so that it will say 'Continue' when an asset will not be created directly.

    • When a search is able to filter out segments based on the hash filter files, and a segment file is not present locally on any node, fetch only the hash filter at first, evaluate that, and only if required, fetch the segment file. This speeds up searches that target segments only present in bucket storage and that have search filters that generate hash filter checks, such as regex and literal text comparisons.

    • Cloning an asset now redirects you to the edit page for the cloned asset, for all asset types.

    • Split the package export page into a dialog with multiple steps.

    • Amended an internal limit on how many segments can be fetched from bucket storage concurrently. The old limit was based on the number of running queries. The new limit is 32.

    • Updated dependencies with security fixes.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only pure filtering queries are allowed.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where the global consistency check job would fail to perform the consistency check, instead logging lines like "Global dump requested but global had expired". This line can still occur, but only when the consistency check takes too long.

    • Fixed a bug where a hidden field named "#humioAutoShard" would sometimes show up in the field list.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing.

    • Global snapshots are now uploaded to bucket storage more often when there are a lot of updates to it, leading to shorter replay times on startup.

    • Introduced a compatibility check between packages and Humio versions.

    • Updated Elastic ingest endpoint to accept 'create' operations in addition to 'index' operations. Both operation types result in the same ingest behavior. This update was added as Fluent-Bit v1.8.3 began using the 'create' operation rather than 'index' for ingest.

    • Fixed a bug which could have caused alerts to not re-fire after the throttle period for field-based throttling had passed.

    • Fixed an issue where, looking at GraphiQL, the dropdown from the navigation menu was partially hidden.

    • Truncate long user names on the Users page.

    • Fixed an issue where the DiskSpaceJob could mark segments accessed slightly out of order during boot.

    • Fixed thread safety for a variable involved in fetching from bucket storage for queries.

    • Fixed an issue where Humio attempted to fetch global from other nodes before TLS was initialized.

    • The simple and advanced permission models have been merged, allowing users who were using the simple permission model to create their own permission roles and groups, create groups with default roles, and use all other features that were previously only available in advanced permissions mode.

    • Updated Slack action for messaging multiple channels, so it propagates errors when triggered. Previously errors were ignored.

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix *.

    • The DiskSpaceJob now removes newly written backfilled segments off the local disk before it chooses to remove non-backfilled segments.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Security checks when viewing installed packages and packages on the Marketplace are now less strict. Permissions are still required for installing and uninstalling packages.

    • Fixed an issue where the {time_zone} Message Templates and Variables for actions would show a full description of the scheduled search instead of only the time zone.

    • Fixed an issue where certain problems highlighted the first word in a query, not the location of the problem.

    • Fixed an issue that caused some metrics of type gauge to be reported with a wrong value.

    • Fixed an issue that caused some errors to be hidden behind a message about "internal error".

    • Fixed an issue where Humio would create a broken hash file for the merge result when merging mini-segments that did not originally have hash files.

    • The DiskSpaceJob no longer initializes based off of the segment last-modified timestamp; this only happens if no access order snapshot is stored locally. If a snapshot is present, we trust that.

    • Fixed a bug causing the disk space job to use an expensive code path even when a cheaper one was available.

    • Fixed an issue where the DiskSpaceJob could continue tracking segments if they were deleted from global, but the files were still present locally.

    • Reworded a confusing error message when using the top() function with a limit parameter exceeding the limits configured with TOP_K_MAX_MAP_SIZE_HISTORICAL or TOP_K_MAX_MAP_SIZE_LIVE.

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Creating a new dashboard now opens it after creation.

    • Fixed an issue where metrics of type gauge with a double value were not reported to the humio-metrics repository, but only to the humio repository.

    • Fixed a bug where a 404 Not Found status on an internal endpoint would be incorrectly reported as a 401 Unauthorized.

    • Fixed an issue where Humio would create auxiliary files (hash files) for segments unnecessarily when moving segments between nodes.

    • When accessing Humio through a URL containing a repository or view name and using an ingest token, it is now checked that the view on the token matches the repository or view in the URL; if not, a 403 Forbidden status is returned.

    • Fixed a bug where queries that triggered an error while executing due to the input (such as a regex that exceeds limits on execution time) could result in the client getting a 404 status on poll, where it should get a 200.
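
The ingest token check described above amounts to a simple guard on the request path. A minimal sketch; the function and parameter names are illustrative, not the actual LogScale internals:

```python
def check_ingest_request(token_view, url_repo_or_view):
    """Return the HTTP status for an ingest request, sketching the new check:
    the repository or view named in the URL must match the token's view."""
    if url_repo_or_view is not None and url_repo_or_view != token_view:
        return 403  # Forbidden: the token does not belong to the URL's view
    return 200
```

Requests through URLs without a repository or view name are unaffected.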

Humio Server 1.30.7 LTS (2022-01-06)

Version: 1.30.7
Type: LTS
Release Date: 2022-01-06
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6

Updated dependencies with security fixes.

Fixed in this release

  • Security

    • Updated dependencies to log4j 2.17.1 to fix CVE-2021-44832 and CVE-2021-45105

    • Updated dependencies to Netty to fix CVE-2021-43797

    • Kafka and xmlsec have been upgraded to address security vulnerabilities, including CVE-2021-38153.

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046)

  • Summary

    • Fixed a compatibility issue with Filebeat 7.16.0

  • Other

    • Fixed an issue where the UI page for creating a new parser could overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that could affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.6 LTS (2021-12-15)

Version: 1.30.6
Type: LTS
Release Date: 2021-12-15
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5

Fixed log4j dependencies.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address security vulnerabilities, including CVE-2021-38153.

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

    • Updated dependencies to log4j 2.16 to remove message lookups (CVE-2021-45046)

  • Summary

    • Fixed a compatibility issue with Filebeat 7.16.0

  • Other

    • Fixed an issue where the UI page for creating a new parser could overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that could affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.5 LTS (2021-12-10)

Version: 1.30.5
Type: LTS
Release Date: 2021-12-10
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4

Fixes log4j dependencies.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

  • Summary

    • Fixed a compatibility issue with Filebeat 7.16.0

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that may affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.4 LTS (2021-12-10)

Version: 1.30.4 (LTS)
Release Date: 2021-12-10
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2, 1.30.3

A security fix related to log4j logging, and a fix for compatibility with Filebeat.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

    • Updated dependencies to address a critical security vulnerability for the log4j logging framework, "log4shell", (CVE-2021-44228).

  • Summary

    • Fixed a compatibility issue with Filebeat 7.16.0

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that may affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.3 LTS (2021-11-25)

Version: 1.30.3 (LTS)
Release Date: 2021-11-25
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1, 1.30.2

A bug fix to resolve a problem with race conditions.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that may affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.2 LTS (2021-11-19)

Version: 1.30.2 (LTS)
Release Date: 2021-11-19
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.30.0, 1.30.1

Bug fixes related to a version dependency, problems with incomplete URLs, and requiring organization-level permissions in certain situations.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue where a URL without content other than the protocol would break installing a package.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Require organization-level permission when changing role permissions that may affect all views and repositories.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Updated a dependency to a version fixing a critical bug.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.1 LTS (2021-10-01)

Version: 1.30.1 (LTS)
Release Date: 2021-10-01
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

These notes include entries from the following previous releases: 1.30.0

Fixes Humio logging MatchExceptions, the frequency of the jobs that delete segment files, and problems with USING_EPHEMERAL_DISKS; also upgrades Kafka and xmlsec to address security vulnerabilities.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • On a node configured with USING_EPHEMERAL_DISKS=true, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.

    • Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.0 LTS (2021-09-17)

Version: 1.30.0 (LTS)
Release Date: 2021-09-17
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs, and domains for malicious activity. This database is updated hourly. It is described in more detail at ioc:lookup().
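
A minimal query sketch of the new function follows. The event field name src_ip is an assumption for illustration, not a required name; consult the ioc:lookup() reference for the exact parameters.

```
// Look up source IPs against the hourly-updated CrowdStrike IOC database
// and keep only events where an indicator matched.
ioc:lookup(field=src_ip, type="ip_address")
| ioc.detected = true
```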

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.
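
Clients can migrate by recording queries one at a time with addRecentQuery instead of replacing the whole set. The sketch below only shows the shape of a GraphQL HTTP request body; the input type name and fields are assumptions, so introspect the live schema for the exact types before relying on them.

```python
import json

# Hedged sketch of building a payload for the replacement mutation.
# The deprecated setRecentQueries still parses but no longer has any
# effect; addRecentQuery records a single recent query.

def graphql_payload(query: str, variables: dict) -> str:
    # A GraphQL HTTP request body is JSON with "query" and "variables".
    return json.dumps({"query": query, "variables": variables})

# Input type and field names below are illustrative assumptions.
ADD_RECENT_QUERY = """
mutation AddRecent($input: AddRecentQueryInput!) {
  addRecentQuery(input: $input)
}
"""

body = graphql_payload(
    ADD_RECENT_QUERY,
    {"input": {"viewName": "myrepo", "queryString": "#type=accesslog"}},
)
print("addRecentQuery" in body)  # True
```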

Fixed in this release

  • Other

    • Fixed an issue where the UI page for new parser could have overflow in some browsers.

    • Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.

    • Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.0

Version: 1.29.0 (GA)
Release Date: 2021-07-09
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.

Warning

This release has been revoked as it contained a known bug fixed in 1.29.1.

As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs, and domains for malicious activity. This database is updated hourly. It is described in more detail at ioc:lookup().

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

  • Field addIngestToken was deprecated in Mutation type, use addIngestTokenV2 instead.

  • Field assignIngestToken was deprecated in Mutation type, use assignParserToIngestToken instead.

Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.1

Version: 1.29.1 (GA)
Release Date: 2021-07-12
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.

Bug fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.2

Version: 1.29.2 (GA)
Release Date: 2021-09-02
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.3

Version: 1.29.3 (GA)
Release Date: 2021-09-07
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Humio Server 1.30.0 Includes the following changes made in Humio Server 1.29.4

Version: 1.29.4 (GA)
Release Date: 2021-09-09
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Humio Server 1.29.4 GA (2021-09-09)

Version: 1.29.4 (GA)
Release Date: 2021-09-09
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Other

    • Added a GraphQL mutation cancelDeleteEvents that allows cancelling a previously submitted deletion. Cancellation is best-effort, and events that have already been deleted will not be restored.

    • Fixed an issue where it was possible to submit queries to the Delete Events API that were not valid for that API. Only purely filtering queries are allowed.

Humio Server 1.29.3 GA (2021-09-07)

Version: 1.29.3 (GA)
Release Date: 2021-09-07
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Other

    • Fixed an issue where the error TooManyTagValueCombination would prevent Humio from starting.

    • Removed the limit on the search interval on cloud sandboxes.

Humio Server 1.29.2 GA (2021-09-02)

Version: 1.29.2 (GA)
Release Date: 2021-09-02
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.

Minor bug fixes

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Other

    • Fixed an issue where if a package failed to be installed, and it contained an action, the failed installation might not be cleaned up properly.

    • Fixed an issue where, looking at GraphQL, the dropdown from the navigation menu was partially hidden.

    • Fixed an issue that could cause UploadedFileSyncJob to crash, if an uploaded file went missing

    • Fixed an issue where new groups added to a repository got a query prefix that disallowed search. The default is now to allow search with the query prefix * (wildcard).

Humio Server 1.29.1 GA (2021-07-12)

Version: 1.29.1 (GA)
Release Date: 2021-07-12
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.

Bug fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Other

    • Fixed an issue that made it appear as though ingest tokens had no associated parser.

Humio Server 1.29.0 GA (2021-07-09)

Version: 1.29.0 (GA)
Release Date: 2021-07-09
Availability: Cloud
End of Support: 2022-09-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.

Warning

This release has been revoked as it contained a known bug fixed in 1.29.1.

As a new feature, Humio now includes an IOC (indicator of compromise) database from CrowdStrike to enable lookup of IP addresses, URLs, and domains for malicious activity. This database is updated hourly. It is described in more detail at ioc:lookup().

Removed

Items that have been removed as of this release.

GraphQL API

  • Deprecated argument repositoryName was removed from Mutation.updateParser field.

  • Deprecated argument name was removed from Mutation.updateParser field.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

  • Field addIngestToken was deprecated in Mutation type, use addIngestTokenV2 instead.

  • Field assignIngestToken was deprecated in Mutation type, use assignParserToIngestToken instead.

New features and improvements

  • Automation and Alerts

    • Better integrates the editing of alert searches and scheduled searches with the search page.

    • Packages now support Webhook actions and references between these and alerts in the Alert schema.

  • GraphQL API

    • Field createIngestListener was deprecated in Mutation type, use createIngestListenerV2 instead

    • Removed the Usage feature flag, which is now always enabled. This breaks backwards compatibility for internal GraphQL feature flag mutations and queries.

    • Field updateIngestListener was deprecated in Mutation type, use updateIngestListenerV2 instead

    • Field copyParser was deprecated in Mutation type, use cloneParser instead

    • Removed the argument includeUsageView from the GraphQL mutation createOrganizationsViews which breaks backwards compatibility for this internal utility method.

  • Configuration

    • Humio nodes will now pick a UUID for themselves using the ZOOKEEPER_PREFIX_FOR_NODE_UUID prefix, even if ZooKeeper is not used. This should make it easier to enable ZooKeeper id management in existing clusters going forward.

    • Allow the internal profiler to be configured via an environment variable. See Environment Variables

    • Add a soft limit on the primary disk based on PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE (roughly the average of the two values). When the soft limit is hit and secondary storage is configured, the segment mover will prefer moving segments to secondary storage right away, instead of fetching them to primary and waiting for the secondary storage transfer job to move them.

  • Other

    • Internal change to parsers adding an id, where previously they only had a name as key.

    • Enabled dark mode for cluster administration pages.

    • The "Save Search as Dashboard" Widget dialog now gives user feedback about missing input in a manner consistent with other forms.

    • Made GlobalConsistencyCheckerJob shut down more cleanly; it could previously log some ugly exceptions during shutdown.

    • When editing a query, Enter no longer accepts a suggestion. Use Tab instead. The Enter key conflicted with the "Run" button for running the query.

    • Refactored the organization pages.

    • Previously, the server could report that a user was allowed to update parsers for a view, even though parsers cannot be used on views, only repositories. Now the server will always say the user cannot change parsers on views.

    • Improved global snapshot selection in cases where a Kafka reset has been performed

    • In thread dumps include the job and query names in separate fields rather than as part of the thread name.

    • Return the responder's vhost in the metadata json.

    • Added dark mode support to Identity provider pages.

    • Created a new Dropdown component, and replaced some uses of the old component with the new.

    • Speed up the SecondaryStorageTransferJob. The job will now delete primary copies much earlier after moving them to the secondary volume.

    • Scheduled searches are now allowed to run once every minute instead of only once every hour.
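
The primary-disk soft limit described in the Configuration entry above ("roughly the average of the two values") can be illustrated with a small sketch. The helper names are hypothetical; LogScale's exact internal formula is not documented beyond that wording.

```python
# Sketch of the soft limit: roughly midway between
# PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE.

def primary_soft_limit(storage_pct: float, max_fill_pct: float) -> float:
    return (storage_pct + max_fill_pct) / 2.0

def prefers_secondary(fill_pct: float, storage_pct: float, max_fill_pct: float) -> bool:
    # Past the soft limit, the segment mover prefers moving segments
    # straight to secondary storage instead of staging them on primary.
    return fill_pct >= primary_soft_limit(storage_pct, max_fill_pct)

print(primary_soft_limit(80.0, 90.0))        # 85.0
print(prefers_secondary(86.0, 80.0, 90.0))   # True
```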

Fixed in this release

  • Functions

    • Fixed a bug causing match() to let an empty key field match a table with no rows.

  • Other

    • Fixed an issue where the "show in context" feature of the event list did not quote the field names in the produced query string.

    • Fixed a bug in the Search View. After editing and saving a saved query in the Search View, the notification message would disappear in an instant, making it impossible to read and to click the link therein.

    • Fixed an issue where exporting a saved query did not include the options for the visualization, e.g. column layout on the event list.

    • Avoided a costly corner case in some uses of glob patterns.

    • Fixed a bug in the blocklist which caused "exact pattern" query patterns to be interpreted as glob patterns.

    • Fixed an issue related to validation of integer arguments. Large integer arguments would be silently truncated and lower limits weren't checked, which led to unspecified behavior. Range errors are now reported in the following functions:

    • Fixed an issue where the axis titles on the timechart were not showing up in dark mode

    • Fixed race condition that could cause parsers to not update correctly in rare cases

    • Fixed a bug where word wrapping in the event list was not always working for log messages with syntax highlighting (e.g. JSON or XML messages)

    • Fixed race condition that could cause event forwarding rules to not update correctly in rare cases

    • When testing a parser, if a test returns more than one event, an info message is now displayed conveying that only the first event is shown.

    • Fixed bugs in the test parser UI, so that it should now always produce a result and be able to handle parsers that either drop events or produce multiple events per input event.

    • Addressed edge cases where the QueryScheduler could throw exceptions with messages similar to "Requirement failed on activeMapperCount=-36".

Humio Server 1.28.2 LTS (2021-09-29)

Version: 1.28.2 (LTS)
Release Date: 2021-09-29
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.28.0, 1.28.1

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Security

    • Kafka and xmlsec have been upgraded to address CVE-2021-38153 (Kafka) and CVE-2021-40690 (xmlsec).

  • Summary

    • When searching through files in a dashboard parameter, users with CSV files greater than 50.0 records could see incomplete results.

    • Fixed a bug that caused 1.27.x versions (but not earlier versions) to add "host xyz is slow" warnings to query results even when that was not the case.

    • While waiting for the upload of files to bucket to complete during shutdown, the threaddumping will continue running, and the node will report as alive as seen from the other nodes.

    • All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.

    • Humio trial installations now require a trial license. To request a trial license go to Getting Started.

    • Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.

    • Humio will now try to upload more segments concurrently during a shutdown than during normal operation.

  • Other

    • The signup path was removed, together with the corresponding pages. Before, anyone could sign up for the Humio SaaS solution. However, with stricter policies, this became obsolete and had to be removed. The new process redirects a potential customer to Humio's official website, where they have to fill in a form in order to be vetted. Once the vetting process is complete, Humio support creates an organization for the customer.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

    • Fixed a bug where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.

Humio Server 1.28.1 LTS (2021-08-24)

Version: 1.28.1 (LTS)
Release Date: 2021-08-24
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

These notes include entries from the following previous releases: 1.28.0

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Summary

    • When searching through files in a dashboard parameter, users with CSV files greater than 50.0 records could see incomplete results.

    • Fixed a bug that caused 1.27.x versions (but not earlier versions) to add "host xyz is slow" warnings to query results even when that was not the case.

    • While waiting for the upload of files to bucket to complete during shutdown, the threaddumping will continue running, and the node will report as alive as seen from the other nodes.

    • All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.

    • Humio trial installations now require a trial license. To request a trial license go to Getting Started.

    • Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.

    • Humio will now try to upload more segments concurrently during a shutdown than during normal operation.

  • Other

    • The signup path was removed, together with the corresponding pages. Before, anyone could sign up for the Humio SaaS solution. However, with stricter policies, this became obsolete and had to be removed. The new process redirects a potential customer to Humio's official website, where they have to fill in a form in order to be vetted. Once the vetting process is complete, Humio support creates an organization for the customer.

    • Fixed an issue that could cause UploadedFileSyncJob to crash if an uploaded file went missing

    • Fixed an issue that could cause cluster nodes to crash when growing the number of digest partitions.

Humio Server 1.28.0 LTS (2021-06-15)

Version: 1.28.0 (LTS)
Release Date: 2021-06-15
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Major changes; also requires at least a trial license and accepting the privacy notice and terms and conditions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutation setRecentQueries, use addRecentQuery in future. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API to not break existing clients, it will not modify the set of recent queries.

Fixed in this release

  • Summary

    • When searching through files in a dashboard parameter, users with CSV files containing more than 50.0 records could see incomplete results.

    • Fixed a bug, introduced in 1.27.x and not present in earlier versions, that could add "host xyz is slow" warnings to query results even when the host was not slow.

    • While waiting for the upload of files to bucket storage to complete during shutdown, thread dumping will continue running, and the node will report as alive as seen from the other nodes.

    • All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.

    • Humio trial installations now require a trial license. To request a trial license go to Getting Started.

    • Backfilled data gets lower priority on local disk when in over-commit mode using bucket storage.

    • Humio will now try to upload more segments concurrently during a shutdown than during normal operation.

Humio Server 1.28.0 Includes the following changes made in Humio Server 1.27.0

Version: 1.27.0
Type: GA
Release Date: 2021-06-14
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.


Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated the GraphQL mutation setRecentQueries; use addRecentQuery instead. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API so as not to break existing clients, it will no longer modify the set of recent queries.

New features and improvements

Fixed in this release

Humio Server 1.28.0 Includes the following changes made in Humio Server 1.27.1

Version: 1.27.1
Type: GA
Release Date: 2021-06-15
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.


Security fixes and some minor fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated the GraphQL mutation setRecentQueries; use addRecentQuery instead. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API so as not to break existing clients, it will no longer modify the set of recent queries.

Fixed in this release

Humio Server 1.27.1 GA (2021-06-15)

Version: 1.27.1
Type: GA
Release Date: 2021-06-15
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.


Security fixes and some minor fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated the GraphQL mutation setRecentQueries; use addRecentQuery instead. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API so as not to break existing clients, it will no longer modify the set of recent queries.

Fixed in this release

  • Summary

    • Fixed an issue where Humio could prematurely clean up local copies of segments involved in queries, causing queries to fail with a "Did not query segment" warning.

    • Updated dependencies with security fixes.

    • Fixed an issue where certain queries would cause a NullPointerException in OneForOneStrategy.

Humio Server 1.27.0 GA (2021-06-14)

Version: 1.27.0
Type: GA
Release Date: 2021-06-14
Availability: Cloud
End of Support: 2022-06-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.


Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated the GraphQL mutation setRecentQueries; use addRecentQuery instead. The mutation will be removed after 2021-10-01. While setRecentQueries will remain in the API so as not to break existing clients, it will no longer modify the set of recent queries.

New features and improvements

  • Automation and Alerts

    • Fixed an issue where it was possible to create an alert with an empty time interval or a blank name or throttle field.

    • The Alert and Scheduled Search dialogs have been given a visual refresh.

  • GraphQL API

    • Deprecated GraphQL field SearchDomain.recentQueries in favor of SearchDomain.recentQueriesV2.

  • Configuration

    • Removed the log4j2-stdout-json.xml configuration file. The replacement log4j2-json-stdout.xml has been available for a while, and we want everyone to move to the new configuration, as the old configuration produces logs incompatible with the Insights Package.

    • Limited how many times a repeating regular expression will be repeated. The default maximum number of repetitions is .0, but the value is configurable between 50 and .0 by setting the MAX_REGEX_REPETITIONS environment variable.

  • Functions

    • With the worldMap() function, you can now see the magnitude value by hovering over marks on the map.

    • Fixed an issue in timeChart() where the horizontal line did not show up.

    • Reduced memory usage for the groupBy() function and related functions; particularly the worst case, but also the average case to some degree.

  • Other

    • Inviting users on cloud now requires the invited user to accept the invitation before permissions are assigned to them. Moreover, it is now possible to invite users who are in another organization on cloud.

    • Fixed an issue where worldmap widgets would revert to event list widgets when changing styling options.

    • Work is in progress on merging the advanced and simple permission models, so that roles can be assigned directly to users.

    • Fixed a problem where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.

    • Improved thread safety of updates to global Hosts entities during bootup.

    • Started internal work on memory quotas on queries' aggregation states. This should not have any user-visible impact yet.

    • Changed implementation of cluster host alive stats to attempt to reduce thread contention when threads are checking for host liveness.

    • Removed the requirement that the SAML Id must be a URL; the only remaining requirement is that the field is not empty.

    • Fixed an issue which caused queries to crash when "beta:repeating()" was used with a time interval ending before "now".

    • The New Action dialog now validates user input more leniently and reports all validation errors consistently.

    • Added a label to the empty option for default queries on the repository settings page.

    • Fixed an issue with AuthenticationMethod.SetByProxy where the search page would constantly reload.

    • Users with read repository permissions can now access and see files.

    • Added a button to delete the organization from the Organization Overview page.

    • Reimplemented several parts of Humio to use a safer mechanism for listening to changes from global. This should eliminate a class of race conditions that could cause nodes to ignore updates received during the boot process.

    • The UI now consistently marks required fields with a red asterisk across a number of dialogs.

    • Fixed various bugs in the worldmap widget. The bug fixes may cause your world map marks to look different than previously, but the widget should now work as intended; correcting any differences should be as simple as tweaking the style parameters.

    • When looking at the details of an event, long field values will now extend beyond the viewport by default. Word wrapping can be enabled to stop them from extending outside the viewport.

    • Improved error messages when exporting invalid dashboards as templates.

    • Changed implementation of cluster host alive stats to trigger updates in the in-memory state based on changes in global, rather than running periodic updates.

    • Updated the interactive tutorial with better descriptions.

    • Fixed an issue where the UI stalled on the Data Sources page.

    • When assigning a role, all the users who need a new role are chosen, and the same role is then assigned to all of them.

    • Added frontend validation to the fields on the welcome and invitation pages.

    • Improved styling of the header on the organization overview page.

    • The list of recent queries on the search page now has headers with the date the query was run.

    • Added ability to set organization usage limits manually for cases where automatic synchronization is not possible.

    • Automatically reduce the precision of world maps when they exceed a certain size limit.

    • Fixed an issue for Firefox 78.10.1 ESR where the event list and event distribution chart would not be scrollable and resize incorrectly.

    • The Humio frontend no longer sends the Humio-Query-Session header to the backend, since it is no longer used.

    • Fixed an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.

    • The event distribution chart would sometimes show a bucket span reported in milliseconds instead of a more appropriate unit, when those milliseconds did not add up cleanly to a single unit (e.g. "1h"). Now the bucket span can be reported with multiple units (e.g. "1h 30m").

    • Added more debug logging to DataSnapshotLoader, for visibility into the choice of global snapshot during boot.

    • In the time selector, you can now write "24" in the time-of-day field to denote the end of the day.

    • Debug logs which relate to the invocation of an action now contain an actionInvocationId. This trace id is the same for all logs generated by the same action invocation.

    • Fixed an issue in the Query State Cache that could fail a query on time intervals with a fixed timestamp as start and now as end.

    • Fixed an OIDC group synchronization issue where users were denied access even though their group membership gave them access.

    • Included both ws and wss in the CSP header.

    • Fixed a problem where the global consistency check would report spurious inconsistencies because of trivial differences in the underlying JSON data.

    • Added a quickfix feature for reserved keywords.

    • Fixed a rare issue that could fail to trigger a JVM shutdown if the Kafka digest leader loop thread became nonfunctional.

    • Slightly improved performance of id lookups in global.
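
The multi-unit bucket span formatting described in the event distribution chart entry above can be sketched roughly as follows. This is a hypothetical illustration only; the function name and unit set are assumptions, not LogScale's actual implementation:

```python
# Hypothetical sketch: render a bucket span given in milliseconds using
# the largest units that fit, e.g. 5400000 ms -> "1h 30m" instead of a
# raw millisecond count.
UNITS = [("d", 86_400_000), ("h", 3_600_000), ("m", 60_000), ("s", 1_000), ("ms", 1)]

def format_span(ms: int) -> str:
    parts = []
    for label, size in UNITS:
        if ms >= size:
            parts.append(f"{ms // size}{label}")
            ms %= size
    return " ".join(parts) or "0ms"

print(format_span(3_600_000))  # a clean single unit: "1h"
print(format_span(5_400_000))  # multiple units: "1h 30m"
```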

Fixed in this release

  • Other

    • Humio trial installations now require a trial license. To request a trial license, go to Getting Started.

    • All users (including existing users) need to accept the privacy notice and terms and conditions (https://www.crowdstrike.com/terms-conditions/humio-self-hosted) before using Humio.

Humio Server 1.26.3 LTS (2021-06-17)

Version: 1.26.3
Type: LTS
Release Date: 2021-06-17
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.26.0, 1.26.1, 1.26.2

Security fixes and some minor fixes related to Firefox, Worldmap widgets, and problems with local file clean-up.

Fixed in this release

  • Summary

    • Fixed an issue where Worldmap widgets would revert to event list widgets when changing styling options.

    • Fixed an issue where data was not visible on the World Map until the opacity setting had been changed.

    • Fixed an issue where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.

    • Fixed an issue for Firefox 78.10.1 ESR where the event list and event distribution chart would not be scrollable and resize incorrectly.

    • Updated the minimum Humio version for Hosts in global when downgrading a node.

    • Fixed an OIDC group synchronization issue where users were denied access even though their group membership gave them access.

    • Fixed an issue where Humio could prematurely clean up local copies of segments involved in queries, causing queries to fail with a "Did not query segment" warning.

    • Fixed an issue where the world map widget would misbehave in different ways.

    • Fixed an issue in Timechart where the horizontal line did not show up.

    • Fixed an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.

    • Updated dependencies with security fixes.

    • Fixed a number of cases where Humio could attempt to write a message to global larger than permitted.

    • All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.

    • Humio trial installations now require a trial license. To request a trial license, go to getting-started (https://www.humio.com/getting-started/).

Known Issues

  • Other

    • A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.

Humio Server 1.26.2 LTS (2021-06-07)

Version: 1.26.2
Type: LTS
Release Date: 2021-06-07
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.26.0, 1.26.1

Several fixes related to the WorldMap and TimeChart widgets, OIDC group synchronization, and requirements for Humio trial installations, as well as privacy notices and terms and conditions, and other bugs.

Fixed in this release

  • Summary

    • Fixed an issue where data was not visible on the World Map until the opacity setting had been changed.

    • Fixed an issue where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.

    • Updated the minimum Humio version for Hosts in global when downgrading a node.

    • Fixed an OIDC group synchronization issue where users were denied access even though their group membership gave them access.

    • Fixed an issue where the world map widget would misbehave in different ways.

    • Fixed an issue in Timechart where the horizontal line did not show up.

    • Fixed an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.

    • Fixed a number of cases where Humio could attempt to write a message to global larger than permitted.

    • All users (including existing users) need to accept the privacy notice and terms and conditions before using Humio.

    • Humio trial installations now require a trial license. To request a trial license, go to getting-started (https://www.humio.com/getting-started/).

Known Issues

  • Other

    • A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.

Humio Server 1.26.1 LTS (2021-05-31)

Version: 1.26.1
Type: LTS
Release Date: 2021-05-31
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.26.0

Several fixes related to WorldMap widget, applying user-defined styles to a dashboard chart, and partitions.

Fixed in this release

  • Summary

    • Fixed an issue where data was not visible on the World Map until the opacity setting had been changed.

    • Fixed an issue where some user-defined styles weren't being applied to a chart after a page refresh or when exported to a dashboard widget.

    • Fixed an issue where optimizeAndSaveQueryCoordinationPartitions could attempt to save a partitioning table to global with gaps in the partition list. This caused queries to fail, and repeated logging of a validation error.

Known Issues

  • Other

    • A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.

Humio Server 1.26.0 LTS (2021-05-20)

Version: 1.26.0
Type: LTS
Release Date: 2021-05-20
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


The HEC ingest endpoint will no longer implicitly parse logs using the built-in kv parser. Previously, a log ingested using this endpoint would implicitly be parsed with the kv parser when the supplied event field was given as a string. For instance, this log:

{
  "time": 1537537729.0,
  "event": "Fri, 21 Sep 2018 13:48:49 GMT - system started name=webserver",
  "source": "/var/log/application.log",
  "sourcetype": "applog",
  "fields": { "#env": "prod" }
}

would be parsed, so that the resulting Humio event would contain the field name=webserver.

If you wish to keep the previous behavior, you will have to perform this parsing operation explicitly.

When ingesting into the HEC endpoint, you use an ingest token to authenticate with Humio. If that token does not have an associated parser, all you need to do is assign the kv parser to the token.

If your ingest token already has an assigned parser, you will need to prepend the code of that parser with this code snippet:

kvParse(@rawstring) | findTimestamp(addErrors=false) |
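
As a hypothetical example, if the parser currently assigned to your ingest token consisted of a single line such as parseJson(), the combined parser after prepending the snippet would read:

kvParse(@rawstring) | findTimestamp(addErrors=false) | parseJson()

The prepended kvParse(@rawstring) reproduces the key-value parsing that the HEC endpoint previously applied implicitly, before the rest of your parser runs.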

Dark Mode is a new visual theme throughout Humio (except some settings pages). It is tailored to offer great readability in dark environments, to avoid brightening the entire room when used on dashboards, and to offer a unique visual style that some users prefer simply for its aesthetics. In 1.25, users will see a modal dialog asking which mode they would like: dark mode, light mode, or following the OS theme. This setting can later be changed in the settings menu.

Fixed in this release

Known Issues

  • Other

    • A regression can cause 1.26.0 to repeatedly error log and fail to start queries in cases where the list of hosts in the cluster is not fixed. This is particularly likely to affect clusters running with ephemeral disks. The regression is fixed in 1.26.1.

Humio Server 1.25.3 GA (2021-05-10)

Version: 1.25.3
Type: GA
Release Date: 2021-05-10
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.


Minor bug fixes, including removing error logs from alert jobs running in a Sandbox.

Fixed in this release

  • Summary

    • Minor bug fixes and improvements.

  • Other

    • Removed error logs from the alert job when running alerts on a sandbox repository.

Humio Server 1.25.2 GA (2021-05-06)

Version: 1.25.2
Type: GA
Release Date: 2021-05-06
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.


Bug fix related to global consistency checks with nodes.

Fixed in this release

  • Summary

    • Fixed a problem where having many nodes and a large global could lead to deadlocks in the global consistency check.

Humio Server 1.25.1 GA (2021-05-04)

Version: 1.25.1
Type: GA
Release Date: 2021-05-04
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.


There is a serious issue affecting larger clusters in this release. The global inconsistency checker job can cause the thread responsible for reading changes from global to hang. It is possible to work around this by disabling the job using RUN_GLOBAL_CONSISTENCY_CHECKER_JOB=false. This is fixed in 1.25.2.
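
As a sketch, the workaround can be applied by setting the variable in the environment that starts Humio. The exact mechanism (a systemd unit, container environment file, etc.) depends on your deployment:

```shell
# Disable the global consistency checker job as a workaround.
# Remove this again once the cluster runs 1.25.2 or later.
export RUN_GLOBAL_CONSISTENCY_CHECKER_JOB=false
echo "$RUN_GLOBAL_CONSISTENCY_CHECKER_JOB"
```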

Fixed in this release

  • Other

    • Made disabled items in the main menu look disabled in dark mode.

Humio Server 1.25.0 GA (2021-04-29)

Version: 1.25.0
Type: GA
Release Date: 2021-04-29
Availability: Cloud
End of Support: 2022-05-31
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No

Available for download two days after release.


There is a serious issue affecting larger clusters in this release. The global inconsistency checker job can cause the thread responsible for reading changes from global to hang. It is possible to work around this by disabling the job using RUN_GLOBAL_CONSISTENCY_CHECKER_JOB=false. This is fixed in 1.25.2 (and 1.26.0).

The HEC ingest endpoint will no longer implicitly parse logs using the built-in kv parser. Previously, a log ingested using this endpoint would implicitly be parsed with the kv parser when the supplied event field was given as a string. For instance, this log:

{
  "time": 1537537729.0,
  "event": "Fri, 21 Sep 2018 13:48:49 GMT - system started name=webserver",
  "source": "/var/log/application.log",
  "sourcetype": "applog",
  "fields": { "#env": "prod" }
}

would be parsed, so that the resulting Humio event would contain the field name=webserver.

If you wish to keep the previous behavior, you will have to perform this parsing operation explicitly.

When ingesting into the HEC endpoint, you use an ingest token to authenticate with Humio. If that token does not have an associated parser, all you need to do is assign the kv parser to the token.

If your ingest token already has an assigned parser, you will need to prepend the code of that parser with this code snippet:

kvParse(@rawstring) | findTimestamp(addErrors=false) |
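
As a hypothetical example, if the parser currently assigned to your ingest token consisted of a single line such as parseJson(), the combined parser after prepending the snippet would read:

kvParse(@rawstring) | findTimestamp(addErrors=false) | parseJson()

The prepended kvParse(@rawstring) reproduces the key-value parsing that the HEC endpoint previously applied implicitly, before the rest of your parser runs.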

Dark Mode is a new visual theme throughout Humio (except some settings pages). It is tailored to offer great readability in dark environments, to avoid brightening the entire room when used on dashboards, and to offer a unique visual style that some users prefer simply for its aesthetics. In 1.25, users will see a modal dialog asking which mode they would like: dark mode, light mode, or following the OS theme. This setting can later be changed in the settings menu.

New features and improvements

  • Other

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new configuration QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.
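
The effect of QUERY_SPENT_FACTOR can be illustrated with a rough sketch. The actual scheduler logic is internal to LogScale; the function name and exact formula below are assumptions meant only to show the intent of the weighting:

```python
# Illustrative sketch only: recent query costs are weighted into the
# priority of a user's next query. QUERY_SPENT_FACTOR plays the role
# of `spent_factor` (default 0.5).

def scheduling_priority(query_cost: float, recent_user_cost: float,
                        spent_factor: float = 0.5) -> float:
    """Lower value = scheduled sooner; a higher spent_factor penalizes
    users with expensive recent queries harder."""
    return query_cost + spent_factor * recent_user_cost

# A user with no recent spend is prioritized over one with heavy spend:
print(scheduling_priority(10.0, recent_user_cost=0.0))    # 10.0
print(scheduling_priority(10.0, recent_user_cost=100.0))  # 60.0
```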

Fixed in this release

  • Automation and Alerts

    • Refreshing actions while creating alerts and scheduled searches now happens automatically, but can also be triggered manually using a button.

    • When running alerts and scheduled searches, all logging related to a specific alert or scheduled search will now be logged to the System Repositories repository, instead of the humio repository. Error logs will still be logged to the humio repository as well.

  • GraphQL API

    • The SearchDomain.viewerCanChangeConnections GraphQL field has been deprecated. Use SearchDomain.isActionAllowed instead.

    • Deprecated the GraphQL fields UserSettings.isEventListOrderChangedMessageDismissed, UserSettings.isNewRepoHelpDismissed, and UserSettings.settings, since they are no longer used and will be removed in a future release.

    • Removed the deprecated Repository.isFreemium GraphQL field.

    • The updateSettings GraphQL mutation has been marked as unstable, as it can control unstable and ephemeral settings.

    • The SearchDomain.queries GraphQL field has been deprecated. Use SearchDomain.savedQueries instead.

  • Configuration

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY configuration.

    • SEGMENTMOVER_EXECUTOR_CORES allows tuning the number of concurrent fetches of segments from other nodes to this node. Defaults to vCPUs/8; must be at least 2.

    • S3_ARCHIVING_IBM_COMPAT for compatibility with S3 archiving to IBM Cloud Object Storage.

  • Ingestion

    • Added audit logging when assigning a parser to an ingest token or unassigning a parser from an ingest token. Added the parser name to all audit logs for ingest tokens.

  • Functions

    • Made the parseLEEF() function more robust and optimized its memory usage.

    • Fixed a bug which could cause head(), tail(), sort() within either bucket() or a live query to return too few results in certain cases.

    • Optimized the splitString() function.

    • Added a new query function: base64Decode().

    • Fixed a bug where cidr() did not respect the include parameter.

  • Other

    • Added a documentation link to the autocomplete description in the Humio search field.

    • Added new parameters handleNull and excludeEmpty to parseJson() to control how null and empty string values are handled.

    • Fixed an issue where, when installing an application package, you sometimes had to refresh the page to get the assets in the package linked to their installed counterparts.

    • Added IP ASN Database license information to the Cluster Administration page.

    • Added a warning to the Cluster Nodes page that warns you if not all Humio servers are running the same Humio version.

    • Some minor performance improvements in the ingest pipeline.

    • Reworked how Humio caches data from global. This fixes a number of data races where Humio nodes could temporarily get an incorrect view of global.

    • Improved error logging for event forwarding.

    • Fixed a bug that made it possible to rename a parser to an existing name, thereby overwriting the existing parser.

    • Changed the built-in audit-log parser so that null values are stored as an empty string value. Previously, they were stored as the string "null". (For parseJson(), the defaults remain consistent with the old behavior: null values become the string "null" and empty string values are kept.)

    • Bumped the minimum supported versions of Chrome and Chromium from 60 to 69 due to updated dependencies.

    • Allow user groups to be represented as a JSON string, and not only as an array, when logging in with OAuth.

    • Query poll response metadata now includes the Query Quota spent by the current user across queries. The cost so far of the current query was already included.

    • Made it possible to delete a parser that overrides a built-in parser, even when it is used by an ingest token.

    • Reworked initialization of Humio's async listener infrastructure, to ensure that listeners do not miss any updates. This fixes a number of flakiness issues that could arise when a node was rebooted.

    • The HEC ingest endpoint is no longer implicitly running kvParse. This used to be the case when ingesting events of the form "event" : "Log line...". If the ingested data is to be key-value parsed, add kvParse() to the relevant parser for the input data.

    • When a query is cancelled, a reason for canceling the query is now always logged. Previously, this was only done if the query was cancelled due to an internal exception. Look for log lines starting with query is cancelled.

    • Fixed an issue where clicking the label of a parser erroneously rerouted the user.

    • Fixed a bug that made it impossible to copy a parser to override a built-in parser.

    • Fixed a bug where a scheduled search would be executed repeatedly, as long as at least one out of multiple actions was failing. Now, execution is only repeated if all actions are failing.

Humio Server 1.24.4 LTS (2021-05-31)

Version: 1.24.4
Type: LTS
Release Date: 2021-05-31
Availability: Cloud
End of Support: 2022-04-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.24.0, 1.24.1, 1.24.2, 1.24.3

Minor bug fixes, as well as a fix for a stack overflow bug in large clusters.

Fixed in this release

  • Summary

    • Minor bug fixes and improvements.

    • Minor bug fixes and improvements.

    • Fixed a stack overflow that could occur during startup in larger clusters.

  • Other

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).

    • Fixed an issue on the search page that prevented the event list from scrolling correctly.

    • Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster.

    • Fixed an issue which prevented Safari users from seeing alert actions.

    • Fixed an issue where, in terms of Query Quota, the cost spent in a long-running query was accounted as spent "now" when the query ended.

    • Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL.

    • Major changes: 1.23.0 and 1.23.1.

    • Fixed an issue where a repository with a very high number of datasources could trigger an error writing an oversized message to Kafka from the ingest-reader-leader thread.

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.

    • Ensured that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work.

    • Fixed an issue where a user would get stuck in infinite loading after having been invited into an organization.

    • Fixed a scrolling issue on the Kafka cluster admin page.

    • Allowed reverse proxies using a 10s timeout to work even for queries that take longer than that to initialize.

    • Reduced off-heap memory usage.

Humio Server 1.24.3 LTS (2021-05-10)

Version: 1.24.3
Type: LTS
Release Date: 2021-05-10
Availability: Cloud
End of Support: 2022-04-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.24.0, 1.24.1, 1.24.2

Minor bug fixes.

Fixed in this release

  • Summary

    • Minor bug fixes and improvements.

  • Other

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).

    • Fixed an issue on the search page that prevented the event list from scrolling correctly.

    • Fixed a bug where ingestOnly nodes could not start on a more recent version that the statefull nodes in the cluster

    • Fixed an issue which prevented Safari users from seeing alert actions

    • Fixed an issue where cost spent in a long-running query got accounted as spent "now" when the query ended in terms of Query Quota

    • Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL

    • Major changes: 1.23.0 and 1.23.1.

    • Fixed an issue where a repository with very high number of datasources could trigger an error writing an oversized message to kafka from the ingest-reader-leader thread

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.

    • Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work

    • Fixed an issue where the user would get stuck in an infinite loading state after having been invited into an organization.

    • Fixed a scrolling issue on the Kafka cluster admin page.

    • Allow reverse proxies using 10s as timeout to work also for a query that takes longer than that to initialize

    • Reduced off-heap memory usage
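
The QUERY_SPENT_FACTOR scheduling change above is described only at a high level; the following is a minimal, hypothetical sketch of cost-weighted ordering (the function names and the exact formula are assumptions, not Humio's actual implementation):

```python
# Hypothetical sketch of cost-weighted query scheduling (not Humio's
# actual implementation). QUERY_SPENT_FACTOR weights each user's
# recent query cost when ordering newly started queries.

QUERY_SPENT_FACTOR = 0.5  # default value per the release note


def schedule_order(pending, recent_cost):
    """Order pending (user, query) pairs so that users with low recent
    cost run first; a higher factor penalizes heavy users harder."""
    def penalty(item):
        user, _query = item
        return QUERY_SPENT_FACTOR * recent_cost.get(user, 0.0)
    return sorted(pending, key=penalty)


pending = [("alice", "q1"), ("bob", "q2"), ("carol", "q3")]
recent_cost = {"alice": 900.0, "bob": 10.0, "carol": 120.0}
print(schedule_order(pending, recent_cost))
```

Under this sketch, the user with the lowest weighted recent cost ("bob") is scheduled first and the heaviest recent user ("alice") last.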

Humio Server 1.24.2 LTS (2021-04-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.24.2LTS2021-04-19

Cloud

2022-04-30No1.16.0No

These notes include entries from the following previous releases: 1.24.0, 1.24.1

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).

    • Fixed an issue on the search page that prevented the event list from scrolling correctly.

    • Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster.

    • Fixed an issue which prevented Safari users from seeing alert actions

    • Fixed an issue where, in terms of Query Quota, the cost spent in a long-running query was accounted as spent "now" when the query ended.

    • Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL

    • Major changes: 1.23.0 and 1.23.1.

    • Fixed an issue where a repository with a very high number of datasources could trigger an error when writing an oversized message to Kafka from the ingest-reader-leader thread.

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.

    • Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work

    • Fixed an issue where the user would get stuck in an infinite loading state after having been invited into an organization.

    • Fixed a scrolling issue on the Kafka cluster admin page.

    • Allow reverse proxies using 10s as timeout to work also for a query that takes longer than that to initialize

    • Reduced off-heap memory usage

Humio Server 1.24.1 LTS (2021-04-12)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.24.1LTS2021-04-12

Cloud

2022-04-30No1.16.0No

These notes include entries from the following previous releases: 1.24.0

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).

    • Fixed an issue on the search page that prevented the event list from scrolling correctly.

    • Fixed a bug where ingestOnly nodes could not start on a more recent version than the stateful nodes in the cluster.

    • Fixed an issue which prevented Safari users from seeing alert actions

    • Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL

    • Major changes: 1.23.0 and 1.23.1.

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.

    • Ensure that if the Kafka leader loop thread dies, it kills the Humio process. In rare cases it was possible for this thread to die, leaving the node incapable of performing digest work

    • Fixed an issue where the user would get stuck in an infinite loading state after having been invited into an organization.

    • Fixed a scrolling issue on the Kafka cluster admin page.

    • Allow reverse proxies using 10s as timeout to work also for a query that takes longer than that to initialize

Humio Server 1.24.0 LTS (2021-04-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.24.0LTS2021-04-06

Cloud

2022-04-30No1.16.0No

Important Information about Upgrading

This release promotes the latest 1.23 release from preview to stable.

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.24.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.24.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Humio will make some internal logs available in a new repository called humio-activity. This is meant for logs that are relevant to users of Humio, as compared to logs that are only relevant for operators. The latter logs are still put into the humio repository. For this release, only new log events will be put into humio-activity, but in later releases, some existing log events that are relevant for users will be put into the humio-activity repository instead of the humio repository.

For cloud users, the logs for your organization can be accessed through the humio-organization-activity view.

For on-prem users, the logs can be accessed directly through the humio-activity repository. They are also output into a new log file named humio-activity.log which can be ingested into the humio repository, if you want it available there as well. If you do and you are using the Insights Package, you should upgrade that to version 0.0.4. For more information, see LogScale Internal Logging.

Humio has decided to adopt an evolutionary approach to its GraphQL API, meaning that we will strive to do only backwards compatible changes. Instead of making non-backwards compatible changes to existing fields, we will instead add new fields alongside the existing fields. The existing fields will be deprecated and might be removed in some later release. We reserve the right to still do non-backwards compatible changes, for instance to fix security issues.

For new experimental features, we will mark the corresponding GraphQL fields as PREVIEW. There will be no guarantees on backwards compatibility on fields marked as PREVIEW.

Deprecated and preview fields and enum values will be marked as such in the GraphQL schema and will be shown as deprecated or preview in the API Explorer. Apart from that, the result of running a GraphQL query using a deprecated or preview field will contain a new field extensions, which contains a field deprecated with a list of all deprecated fields used in the query and a field preview with a list of all preview fields used in the query.

Example:

json
{
  "data": "...",
  "extensions": {
    "deprecated": [
      {
        "name": "alert",
        "reason": "[DEPRECATED: Since 2020-11-26. Deprecated since 1.19.0. Will be removed March 2021. Use 'searchDomain.alert' instead]"
      }
    ]
  }
}

Deprecated fields and enum values will also be noted in the release note, when they are first deprecated. All use of deprecated fields and enum values will also be logged in the Humio repository humio-activity. They will have #category=GraphQL, subCategory=Deprecation and #severity=Warning. If you are using the API, consider creating an alert for such logs.
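
For example, a LogScale alert query on the humio-activity repository could filter on the fields above (a sketch only; adapt it to your setup):

```
#category=GraphQL subCategory=Deprecation #severity=Warning
```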

Removed Support for CIDR Shorthand

Previous versions of Humio supported a shorthand for IPv4 CIDR expressions. For example, 127.1/16 would be equivalent to 127.1.0.0/16. This was contrary to other implementations such as the Linux function inet_aton, where 127.1 expands to 127.0.0.1. Support for this shorthand has been removed, and the complete address must now be written instead.
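
The ambiguity behind the removed shorthand can be demonstrated with standard inet_aton semantics; this sketch uses Python's socket module (which wraps the C function) to show that 127.1 expands to 127.0.0.1, not 127.1.0.0:

```python
import socket

# inet_aton implements the classful shorthand: in "127.1" the last
# component fills the remaining bytes, yielding 127.0.0.1. This is why
# the old "127.1/16" == "127.1.0.0/16" reading conflicted with it.
packed = socket.inet_aton("127.1")
print(socket.inet_ntoa(packed))  # "127.0.0.1"

# The complete address must now be written out explicitly:
print(socket.inet_ntoa(socket.inet_aton("127.1.0.0")))  # "127.1.0.0"
```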

Fixed in this release

  • Other

    • Removed the QUERY_QUOTA_EXCEEDED_PENALTY config (introduced in 1.19.0).

    • Fixed an issue on the search page that prevented the event list from scrolling correctly.

    • Fixed an issue which prevented Safari users from seeing alert actions

    • Fixed an issue which caused problems with forward/backward compatibility of LanguageVersion in GraphQL

    • Major changes: 1.23.0 and 1.23.1.

    • The query scheduler now prioritizes new queries started by a user based on the cumulative cost of recent queries started by that user. Added new config QUERY_SPENT_FACTOR with the default value 0.5, which defines the weight of recent query costs when scheduling. Higher values mean that users with high recent query costs will see their queries penalized harder in the scheduling.

    • Fixed a scrolling issue on the Kafka cluster admin page.

Humio Server 1.23.1 LTS (2021-03-24)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.23.1LTS2021-03-24

Cloud

2022-03-31No1.16.0No

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.23.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.23.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Configuration

    • S3_ARCHIVING_IBM_COMPAT for compatibility with S3 archiving to IBM Cloud Object Storage.

  • Other

    • Allow a user's groups to be represented as a JSON string, and not only as an array, when logging in with OAuth.

Humio Server 1.23.0 GA (2021-03-18)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.23.0GA2021-03-18

Cloud

2022-03-31No1.16.0No

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.23.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.23.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Humio will make some internal logs available in a new repository called humio-activity. This is meant for logs that are relevant to users of Humio, as compared to logs that are only relevant for operators. The latter logs are still put into the humio repository. For this release, only new log events will be put into humio-activity, but in later releases, some existing log events that are relevant for users will be put into the humio-activity repository instead of the humio repository.

For cloud users, the logs for your organization can be accessed through the humio-organization-activity view.

For on-prem users, the logs can be accessed directly through the humio-activity repository. They are also output into a new log file named humio-activity.log which can be ingested into the humio repository, if you want it available there as well. If you do and you are using the Insights Package, you should upgrade that to version 0.0.4. For more information, see LogScale Internal Logging.

Humio has decided to adopt an evolutionary approach to its GraphQL API, meaning that we will strive to do only backwards compatible changes. Instead of making non-backwards compatible changes to existing fields, we will instead add new fields alongside the existing fields. The existing fields will be deprecated and might be removed in some later release. We reserve the right to still do non-backwards compatible changes, for instance to fix security issues.

For new experimental features, we will mark the corresponding GraphQL fields as PREVIEW. There will be no guarantees on backwards compatibility on fields marked as PREVIEW.

Deprecated and preview fields and enum values will be marked as such in the GraphQL schema and will be shown as deprecated or preview in the API Explorer. Apart from that, the result of running a GraphQL query using a deprecated or preview field will contain a new field extensions, which contains a field deprecated with a list of all deprecated fields used in the query and a field preview with a list of all preview fields used in the query.

Example:

json
{
  "data": "...",
  "extensions": {
    "deprecated": [
      {
        "name": "alert",
        "reason": "[DEPRECATED: Since 2020-11-26. Deprecated since 1.19.0. Will be removed March 2021. Use 'searchDomain.alert' instead]"
      }
    ]
  }
}

Deprecated fields and enum values will also be noted in the release note, when they are first deprecated. All use of deprecated fields and enum values will also be logged in the Humio repository humio-activity. They will have #category=GraphQL, subCategory=Deprecation and #severity=Warning. If you are using the API, consider creating an alert for such logs.

Removed Support for CIDR Shorthand

Previous versions of Humio supported a shorthand for IPv4 CIDR expressions. For example, 127.1/16 would be equivalent to 127.1.0.0/16. This was contrary to other implementations such as the Linux function inet_aton, where 127.1 expands to 127.0.0.1. Support for this shorthand has been removed, and the complete address must now be written instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • Deprecated GraphQL mutations addAlertLabel, removeAlertLabel, addStarToAlert and removeStarFromAlert as they did not follow the standard for mutation input.

New features and improvements

Fixed in this release

  • Automation and Alerts

    • Restyled the alert dialogue.

    • Deprecated the REST endpoints for alerts and actions.

  • Functions

    • Deprecated file and column parameter on cidr(). Use match() with mode=cidr instead.

    • Fixed a bug which caused glob-patterns in match() to not match newline characters.

    • Negated, non-strict match() or lookup() is no longer allowed.

    • Added mode parameter to match(), allowing different ways to match the key.

    • Fixed a bug which caused tag-filters in anonymous functions to not work in certain cases (causing too many events to be let through).

    • Deprecated glob parameter on match(), use mode=glob instead.

    • Removed support for shorthand IPv4 CIDR notation in cidr(). See section "Removed support for CIDR shorthand".

    • Fixed a bug in event forwarding that made start(), end() and now() return the time at which the event forwarding rule was cached. Instead, now() will return the time at which the event forwarding rule was run. start() and end() were never meant to be used in an event forwarding rule and will return 0, which means Unix Epoch.

    • Fixed a bug which caused in() with values=[] to give incorrect results.

    • Added support for CIDR matching on match() using mode=cidr.

    • Improved performance when using match() with mode=cidr compared to using cidr() with file().

  • Other

    • Enforce permissions to enter the Organization Settings page.

    • Added a new introduction message to empty repositories.

    • Fixed an issue which caused Ingesting Data to Multiple Repositories to break when the parser used copyEvent to duplicate the input events into multiple repositories.

    • Refactor how the width of the repository name in the main navigation bar is calculated.

    • Improved performance of free-text search using regular expressions.

    • The GraphQL API Explorer has been upgraded to a newer version. The new version includes a history of the queries that have been run.

    • Added an option to make it easier to diagnose problems by detecting inconsistencies between globals in different Humio instances. Each Humio instance has its own copy of the global state, which must all be identical. It has happened that they were not, so now we check, and if there is a difference, we report an error and dump the global state into a file.

    • Allow turning off encryption of files stored in bucket storage by explicitly setting S3_STORAGE_ENCRYPTION_KEY=off (similar for GCP_).

    • The GraphQL API Explorer is now available from inside Humio. You can access it using the Help->API Explorer menu.

    • Fixed the requirement condition for the time retention on a repository.

    • Removed the deprecated Repository.isFreemium GraphQL field.

    • Fixed a bug where the same regex pattern occurring multiple times in a query could cause incorrect results

    • Deprecated the ReadEvents enum variant from the ViewAction enum in GraphQL. Use the ReadContents variant instead, which has the same semantics, but a more accurate name. ReadEvents will be removed in a future release.

    • UI enhancements for the new repository Access Permissions page.

    • Fixed an issue where changes to files would not propagate to parsers or event forwarders.

    • Fixed an issue causing undersized segment merging to repeatedly fetch the same segments, in cases where the merger job took too long to finish.

    • Fixed an issue where Prometheus metrics always reported 0.0 for humio_primary_disk_usage

    • Enforce permissions to enter the create new repository page.

    • Refactor Organization Overview page.

    • Fixed a bug which caused match() to give incorrect results in certain cases due to incorrect caching

    • Fixed a bug where events deleted with the delete-event API would appear deleted at first, but then resurface after 24h if the user applying the delete did not have permission to search the events being deleted.

    • Made the S3 archiving save button work again.

    • Changed the URL of the Kafka cluster page in the settings.

    • Enforce accepting terms and conditions.

    • Improved memory use for certain numerical aggregating functions.

    • Fixed an issue where regular expressions too large to handle would sometimes cause the query to hang. Now we report an error.

    • The SearchDomain.queries GraphQL field has been deprecated, and will be removed in a future release. Use SearchDomain.savedQueries instead.

    • Refactor All Organizations page.

    • Added an IP filter for read-only dashboard links, and started audit logging of read-only dashboard access. In this initial version, the read-only IP filter can be configured with the GraphQL mutation:

      graphql
      mutation {
        updateReadonlyDashboardIPFilter(ipFilter: "FILTER")
      }

      The FILTER is expected in the IP Filter format. From Humio 1.25, this can be configured in the configuration UI.

    • Mark required fields on the Accept Terms and Conditions page.

    • Fixed an issue with the Missing Segments API that caused missing segments to not appear in the missing segments list if they had a replacement segment.

    • Refactor client side action cache of allowed permissions.

    • Implemented toggle button for dark mode.

    • It is again possible to sort the events on the test parser page.

    • The SearchDomain.viewerCanChangeConnections GraphQL field has been deprecated, and will be removed in a future release. Use SearchDomain.isActionAllowed instead.
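
The mode=cidr additions to match() above can be illustrated with a query sketch (the file name and field here are hypothetical):

```
// Hypothetical: match client_ip against CIDR ranges from an uploaded file
| match(file="known-networks.csv", field=client_ip, mode=cidr)
```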

Humio Server 1.22.1 LTS (2021-03-02)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.22.1LTS2021-03-02

Cloud

2022-03-31No1.16.0No

These notes include entries from the following previous releases: 1.22.0

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.22.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.22.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • Restrict concurrency when mirroring uploaded files within the cluster

    • Fixed issue where tag filters in anonymous parts within an aggregate did not get applied

    • Fix issue where updating account settings would present the user with an error even though the update was successful

    • Changed log lines to create a 'kind' field. Kind is used as a tag for the different Humio logs.

    • Add the "ProxyOrganization" header to the list of general auth headers used on REST calls

    • Fixed issue where local segment files would not get deleted in time, potentially filling the disk

    • Major changes: (see version 1.21.0 and 1.21.1 release notes)

    • Fixed issue where root users were not allowed to set unlimited time in retention settings

    • Fixed an overflowing editor bug.

    • Increase HTTP chunk size from 16MB to 128MB

    • Fixed the parser list having no height on Safari.

    • Fixed a problem where alert pages had no height in Safari.

Humio Server 1.22.0 LTS (2021-03-02)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.22.0LTS2021-03-02

Cloud

2022-03-31No1.16.0Yes

Important Information about Upgrading

This release promotes the latest 1.21 release from preview to stable.

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.22.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.22.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

UI Revamp

In this version, the UI has been given a complete makeover.

Fixed in this release

  • Other

    • Restrict concurrency when mirroring uploaded files within the cluster

    • Fixed issue where tag filters in anonymous parts within an aggregate did not get applied

    • Fix issue where updating account settings would present the user with an error even though the update was successful

    • Changed log lines to create a 'kind' field. Kind is used as a tag for the different Humio logs.

    • Add the "ProxyOrganization" header to the list of general auth headers used on REST calls

    • Fixed issue where local segment files would not get deleted in time, potentially filling the disk

    • Major changes: (see version 1.21.0 and 1.21.1 release notes)

    • Fixed issue where root users were not allowed to set unlimited time in retention settings

    • Increase HTTP chunk size from 16MB to 128MB

Humio Server 1.21.1 GA (2021-02-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.21.1GA2021-02-23

Cloud

2022-03-31No1.16.0No

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.21.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.21.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • New "prefetch from bucket" job. When a node starts with an empty disk it will download a relevant subset of segment files from the bucket in order to have them present locally for queries.

    • The Server: header in responses from the Humio HTTP server now includes (Vhost, NodeRole) after the version string.

    • Improve performance of "decrypt step" in downloads from bucket storage
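
A response carrying the extended Server: header might then look like this (the version string and values shown are purely illustrative):

```
Server: Humio/1.21.1 (vhost-42, ingest)
```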

Humio Server 1.21.0 GA (2021-02-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.21.0GA2021-02-22

Cloud

2022-03-31No1.16.0No

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.21.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.21.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Removed

Items that have been removed as of this release.

Other

  • The deprecated built-in parser bro-json has been deleted. It has been replaced by the parser zeek-json.

  • The deprecated built-in parser json-for-notifier has been deleted. It has been replaced by the parser json-for-action.

Fixed in this release

  • Automation and Alerts

    • Create, update and delete of an alert, scheduled search or action is now recorded in the audit log.

  • Functions

    • Fixed a bug in lowercase() which caused the case lowercase(field="*", include="values") to not process all fields, but only the field named "*".

    • Fixed a bug which caused validation to miss rejecting window() inside window() and session().

    • subnet() now reports an error if its argument bits is outside the range 0 to 32.

    • The replace() function now reports an error if the arguments replacement and with are provided at the same time.

    • The split() function no longer adds a @display field to the event it outputs.

    • The replace() function now reports an error if an unsupported flag is provided in the flags argument.

    • Changed handling of groupBy() in live queries, which should in many cases reduce memory cost.

    • The functions worldMap() and geohash() now generate errors if the requested precision is greater than 12.

    • Fixed a memory leak in rdns() in cases where many different name servers are used.

    • Fixed a bug which caused eventInternals() to crash if used late in the pipeline.

    • The transpose() function now reports an error if the arguments header or column is provided together with the argument pivot.

    • Fixed bugs in format() which caused output from %e and %g to be incorrect in certain cases.

    • Fixed a performance and a robustness problem with the function unit:convert(). The formatting of the numbers in its output may in some cases be different now.

    • The findTimestamp() function has been changed, so that it no longer has a default value for the timezone parameter. Previously, the default was UTC. If no timezone argument is supplied to the function, it will not parse timestamps that do not contain a timezone. To get the old functionality, simply add timezone=UTC to the function. This can be done before upgrading to this release.

    • The experimental function moment() has been removed.

  • Other

    • The Humio Insights package is now installed on the humio view, if missing, when Humio is started.

    • Fixed an issue causing event redirection to break when using copyEvent to get the same events ingested into multiple repositories.

    • Raised the note widget text length limit to .00.

    • kvParse() now unescapes backslashes when they're inside (' or ") quotes.

    • Fixed an issue where repeating queries would not validate in alerts.

    • Make the thread dump job run on a dedicated thread, rather than running on the thread pool shared with other jobs.

    • Fixed an issue with lack of escaping in filename when downloading.

    • Running test of a parser is no longer recorded in the audit log, and irrelevant fields are no longer recorded upon parser deletion.

    • Made logging for running alerts more consistent and more structured. All log entries regarding a specific alert will contain the keys alertId, alertName and viewId. Log entries regarding the alert query will always contain the key externalQueryId, and sometimes also the keys queryId with the internal id and query with the actual query string. If there are problems with the run-as-user, the id of that user is logged with the key user.

    • Fixed a bug where analysis of a regex could consume extreme amounts of memory.

    • Raised the parser test character length limit to .00.

    • Fixed an issue where the segment mover might schedule too many segments for transfer at a time.

    • Fixed a number of potential concurrency issues.

    • Fixed an issue causing Humio to crash when attempting to delete an idle empty datasource right as the datasource receives new data.

    • Made sure the default parser for the humio view is only installed when missing, instead of overwriting it every time Humio starts.

    • Improve number formatting in certain places by being better at removing trailing zeros.

    • Lowered the severity level for some loggings for running alerts.

    • Fixed a bug where referenced saved queries were not referenced correctly after exporting them as part of a package.

    • kvParse() now also unescapes single quotes (').

    • Improve hit rate of query state cache by allowing similar but not identical queries to share cache when the entry in the cache can form the basis for both. The cache format is incompatible with previous versions, this is handled internally by handling incompatible cache entries as cache misses.

    • Fixed a bug which could cause saving of query state cache to take a rather long time.

    • The default parser kv has been changed from using the parseTimestamp() function to using the findTimestamp() function. This will make it able to parse more timestamp formats. It will still only parse timestamps with a timezone. It also no longer adds a timezone field with the extracted timestamp string. This was only done for parsing the timestamp and not meant for storing on the event. To keep the old functionality, clone the kv parser in the relevant repositories and store the cloned parser with the name kv. This can be done before upgrading to this release. See kv.

    • Fixed a bug in parseJson() which resulted in failed JSON parsing if an object contained an empty key ("").

    • Fixed an issue with the validation of the query prefix set on a view for each repository within the view: Invoking macros is not allowed and was correctly rejected when creating a view, but was not rejected when editing an existing connection.

    • Fixed a bug which could potentially cause a query state cache file to be read in an incomplete state.

    • Improve performance of writeJson() a bit.

    • When using filters on dashboards, you can now easily reset the filter, either removing it completely, or using the default filter if one is present.

    • Prevent Humio from booting when ZooKeeper has been reset but Kafka has not.

    • Fixed an issue causing segment tombstones to potentially be deleted too early if bucket storage is enabled, causing an error log.

    • Made logging for running scheduled searches more consistent and more structured. All log lines regarding a specific scheduled search will contain the keys scheduledSearchId, scheduledSearchName and viewId. Log lines regarding the scheduled search query will always contain the key externalQueryId, and sometimes also the keys queryId with the internal id and query with the actual query string. If there are problems with the run-as-user, the id of that user is logged with the key user.

    • Fixed an issue where cancelled queries could be cached.

    • Fixed a bug in upper() and lower() which could cause their output to be corrupted (in cases where no characters had been changed).

    • Fixed an issue where merges of segments were reported as failed due to input files being deleted while merging. This is not an error, and is no longer reported as such.

    • kvParse() now only unescapes quotes and backslashes that are inside a quoted string.
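
      As a sketch of the new behavior (the log line and field names below are invented for illustration):

      ```logscale
      kvParse()
      // Given the input:  dir=C:\temp msg="say \"hi\""
      // the backslash in dir is kept, because it is outside a quoted string,
      // while the escaped quote in msg is unescaped:  msg => say "hi"
      ```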

    • Added support for disaster recovery of a cluster where all nodes, including Kafka, have been lost: the state present in bucket storage is restored as a fresh cluster, using the old bucket as a read-only source. New configs S3_RECOVER_FROM_REPLACE_REGION and S3_RECOVER_FROM_REPLACE_BUCKET allow modifying the names of the region/bucket while recovering, so that recovery can run on a replica. The read-only source is specified using S3_RECOVER_FROM* for all the bucket storage target parameters otherwise named S3_STORAGE*.
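
      A hedged sketch of what the recovery configuration might look like. The two REPLACE variables are named in the note above; the source-bucket variable names are assumptions following the described S3_RECOVER_FROM*/S3_STORAGE* naming pattern and should be checked against the reference documentation:

      ```shell
      # Read-only source to recover from (assumed names, per the S3_STORAGE* pattern)
      S3_RECOVER_FROM_BUCKET=old-cluster-bucket
      S3_RECOVER_FROM_REGION=us-east-1
      # Renames applied while recovering, to allow running on a replica (named in the note)
      S3_RECOVER_FROM_REPLACE_REGION=us-west-2
      S3_RECOVER_FROM_REPLACE_BUCKET=replica-of-old-cluster-bucket
      ```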

    • When using ephemeral disks, nodes that are replaced with new ones on empty disks no longer download most of the segments they had before being replaced, but instead schedule downloads based on what is being searched.

    • The Auth0 login page will no longer load a local version of the Auth0 Lock library, but will instead load a login script hosted on Auth0's CDN. This may require opening access to https://cdn.auth0.com/ if hosting Humio behind a firewall.

  • Packages

    • When exporting a package, you now get a preview of the icon you've added for the package.

    • Packages can now be updated with the same version but new content. This makes iterating over a package before finalizing it easier.

Humio Server 1.20.4 LTS (2021-02-22)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.20.4   LTS   2021-02-22    Cloud         2022-01-31      No                1.16.0         No

These notes include entries from the following previous releases: 1.20.0, 1.20.1, 1.20.2, 1.20.3

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.4 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.4. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Functions

    • Fixed a bug in upper() and lower() functions which could cause their output to be corrupted (in cases where no characters had been changed).

    • Fixed newline glob handling in in() and match() functions.

  • Other

    • Fixed an issue causing event redirection to break when using copyEvent() function to get the same events forwarded to multiple repositories.

    • Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.

    • Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.

    • New "prefetch from bucket" job - When a node starts with an empty disk it will download a relevant subset of segment files from the bucket in order to have them present locally for queries.

    • Minor fix to Humio internal JSON logging when using the configuration HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml.

    • Fixed an issue where cloning a dashboard or parser would clone the wrong entity.

    • Enable Package marketplace (in beta)

    • Query scheduling now tracks cost spent across queries for each user and tends to select next task so that users (rather than queries) each get a fair share of available CPU time.

    • Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.

    • Reduce triggering of auto completion in the query editor.

    • Fixed an issue where some parts of regexes were not shown in the parser editor.

    • Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.

    • Fixed a bug in parseJson() which resulted in failed JSON parsing if an object contained an empty key ("").

    • Connecting to Packages now respects the Humio proxy configuration

    • Improve auto completion suggestions.

    • Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.

    • Handle inconsistencies in the global entities file gracefully rather than crashing.

    • Fixed a bug where merged segments could grow too large if the source events were large.

    • Major changes (see version 1.19.0 release notes)

    • Fixed a bug where fields panel was not scrollable in Safari.

Humio Server 1.20.3 LTS (2021-02-11)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.20.3   LTS   2021-02-11    Cloud         2022-01-31      No                1.16.0         No

These notes include entries from the following previous releases: 1.20.0, 1.20.1, 1.20.2

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.3 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.3. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Functions

    • Fixed a bug in upper() and lower() functions which could cause their output to be corrupted (in cases where no characters had been changed).

  • Other

    • Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.

    • Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.

    • Minor fix to Humio internal JSON logging when using the configuration HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml.

    • Fixed an issue where cloning a dashboard or parser would clone the wrong entity.

    • Enable Package marketplace (in beta)

    • Query scheduling now tracks cost spent across queries for each user and tends to select next task so that users (rather than queries) each get a fair share of available CPU time.

    • Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.

    • Reduce triggering of auto completion in the query editor.

    • Fixed an issue where some parts of regexes were not shown in the parser editor.

    • Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.

    • Fixed a bug in parseJson() which resulted in failed JSON parsing if an object contained an empty key ("").

    • Improve auto completion suggestions.

    • Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.

    • Handle inconsistencies in the global entities file gracefully rather than crashing.

    • Fixed a bug where merged segments could grow too large if the source events were large.

    • Major changes (see version 1.19.0 release notes)

    • Fixed a bug where fields panel was not scrollable in Safari.

Humio Server 1.20.2 LTS (2021-02-11)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.20.2   LTS   2021-02-11    Cloud         2022-01-31      No                1.16.0         No

These notes include entries from the following previous releases: 1.20.0, 1.20.1

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.2. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Functions

    • Fixed a bug in upper() and lower() functions which could cause their output to be corrupted (in cases where no characters had been changed).

  • Other

    • Fixed a bug where exporting a package with dashboard parameters would not set the correct namespace for a saved query called in a parameter.

    • Fixed a bug where exporting a package using a saved query with spaces in the name would not export the correct name.

    • Minor fix to Humio internal JSON logging when using the configuration HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml.

    • Fixed an issue where cloning a dashboard or parser would clone the wrong entity.

    • Enable Package marketplace (in beta)

    • Query scheduling now tracks cost spent across queries for each user and tends to select next task so that users (rather than queries) each get a fair share of available CPU time.

    • Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.

    • Reduce triggering of auto completion in the query editor.

    • Fixed an issue where some parts of regexes were not shown in the parser editor.

    • Improve query cache hit rate by not starting queries locally when the preferred nodes are down, if the local node has just started — as there is a fair chance the preferred nodes will show up shortly too.

    • Fixed a bug in parseJson() which resulted in failed JSON parsing if an object contained an empty key ("").

    • Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.

    • Handle inconsistencies in the global entities file gracefully rather than crashing.

    • Fixed a bug where merged segments could grow too large if the source events were large.

    • Major changes (see version 1.19.0 release notes)

    • Fixed a bug where fields panel was not scrollable in Safari.

Humio Server 1.20.1 LTS (2021-02-01)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.20.1   LTS   2021-02-01    Cloud         2022-01-31      No                1.16.0         No

These notes include entries from the following previous releases: 1.20.0

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.1. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • Minor fix to Humio internal JSON logging when using the configuration HUMIO_LOG4J_CONFIGURATION=log4j2-json-stdout.xml.

    • Enable Package marketplace (in beta)

    • Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.

    • Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.

    • Fixed a bug where merged segments could grow too large if the source events were large.

    • Major changes (see version 1.19.0 release notes)

Humio Server 1.20.0 LTS (2021-01-28)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.20.0   LTS   2021-01-28    Cloud         2022-01-31      No                1.16.0         Yes

Important Information about Upgrading

This release promotes the latest 1.19 release from preview to stable.

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.20.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.20.0. In case you need to do a rollback, this can also only happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

This version introduces Humio packages - a way of bundling and sharing assets such as dashboards and parsers. You can create your own packages to keep your Humio assets in Git, or create utility packages that can be installed in multiple repositories. All assets can be serialized to YAML files (as has been possible for dashboards for a while). With tight integration with Humio's CLI humioctl, you can install packages from local disk, a URL, or directly from a GitHub repository. Packages are still in beta, but we encourage you to start creating packages yourself and sharing them with the community. At Humio we are also very interested in talking with package authors about getting your packages on our upcoming marketplace.

Read more about packages on our Packages page.

With the introduction of Humio packages, we have created the Insights Package application: a collection of dashboards and saved searches making it possible to monitor and observe a Humio cluster.

The new query editor has a much better integration with Humio's query language. It will give you suggestions as you type, and gives you inline errors if you make a mistake. We will continue to improve the capabilities of the query editor to be aware of fields, saved queries, and other contextual information.

A new function called test() has been added for convenience. What used to be written as tmp := expression | tmp=true can now be done using test( expression ). Inside expression, field names appearing on the right-hand side of an equality test, such as field1==field2, compare the values of the two fields. When comparing using = at top level, field1=field2 compares the value of field1 against the string "field2". This distinction is a cause of confusion for some users, and using test() simplifies that.

We have made small changes to how Humio logs internally. We did this to better support the new humio/insights package. We have tried to keep the changes as small and compatible as possible, but we have made some changes that can break existing searches in the humio repository (or other repositories receiving Humio logs). We made these changes as we think they are important in order to improve things moving forward. One of the benefits is the new humio/insights package. Read more about the details in LogScale Internal Logging.

To see more details, go through the individual 1.19.x release notes.

Fixed in this release

  • Other

    • Enable Package marketplace (in beta)

    • Segment download timeout raised from 120s to 1.0s. Avoids situations where large segments could not be moved around a cluster.

    • Fixed an issue causing the secondary storage transfer job to plan more segments for transfer than necessary.

    • Fixed a bug where merged segments could grow too large if the source events were large.

    • Major changes (see version 1.19.0 release notes)

Humio Server 1.19.2 GA (2021-01-25)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.19.2   GA    2021-01-25    Cloud         2022-01-31      No                1.16.0         No

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to minimum 1.16.0 before trying to upgrade to 1.19.2. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer, rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Other

    • Fixed an issue for on-prem users not on a multitenant setup by reverting a metric change introduced in 1.18.0, where JMX and SLF4J included an OrgId in all metrics for repositories.

  • Packages

    • Fixed automatic installation of the Humio insights package into the humio repository.

Humio Server 1.19.1 GA (2021-01-19)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.19.1   GA    2021-01-19    Cloud         2022-01-31      No                1.16.0         No

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to minimum 1.16.0 before trying to upgrade to 1.19.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer, rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Functions

    • Fixed a bug where the format() function produced wrong output for some floating-point numbers.

  • Other

    • Fixed an issue where a datasource could be deleted before its segments had been deleted, including from bucket storage if present there.

    • Update dependencies with known vulnerabilities

    • Do not retry a query when getting an HTTP .0 error

    • Do not cache cancelled queries.

  • Packages

    • Fixed bug in a saved query in the Humio insights package.

Humio Server 1.19.0 GA (2021-01-14)

Version  Type  Release Date  Availability  End of Support  Security Updates  Upgrades From  Config. Changes
1.19.0   GA    2021-01-14    Cloud         2022-01-31      No                1.16.0         Yes

Available for download two days after release.

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.19.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.19.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Deprecation

Items that have been deprecated and may be removed in a future release.

New features and improvements

  • Other

    • Stateless Ingest-only nodes: A node that the rest of the cluster does not know exists, but is capable of ingesting events into the ingest queue. Enable using NODE_ROLES=ingestonly.

    • Custom ingest tokens making it possible for root users to create ingest tokens with a custom string.

Fixed in this release

  • Configuration

    • New config AUTO_UPDATE_MAXMIND for enabling/disabling updating of all maxmind databases. Deprecates AUTO_UPDATE_IP_LOCATION_DB, but old config will continue to work.

    • New config QUERY_QUOTA_EXCEEDED_PENALTY, with a default value of 50. When set to a value >= 1, this throttles queries from users that are over their quota by this factor rather than stopping their queries. Set to 0 to disable and revert to stopping queries.
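
      As a sketch, the two modes described above would be configured like this (values illustrative):

      ```shell
      # Throttle over-quota users' queries by this factor (the default is 50)
      QUERY_QUOTA_EXCEEDED_PENALTY=50
      # Setting it to 0 instead reverts to stopping over-quota queries outright
      ```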

  • Functions

    • New function hash() for computing hashes of fields. See hash() reference page.

    • Fixed an issue with the cidr() function that would make some IPv4 subnets accept IPv6 addresses and some strings that were not valid IP addresses.

    • Make the query functions window() and series() be enabled by default. They can be disabled by setting the configuration options WINDOW_ENABLED and SERIES_ENABLED to false, respectively.

    • Added a new function for retrieving the ASN number for a given IP address, see asn() reference page.

    • Fixed an issue causing queries using kvParse() to be executed incorrectly in certain circumstances when kvParse() assigned fields starting with a non-alphanumeric character.

    • Fixed an issue where unit-conversion (by timechart) did not take effect through groupBy() and window().

    • Fixed an issue causing queries using kvParse() to filter out too much in specific circumstances when filtering on a field assigned before kvParse().

  • Other

    • New filter function test().

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

    • API Changes (Non-Documented API): getFileContent has been moved to a field on the SearchDomain type.

    • The built-in json-for-notifier parser used by the Humio Repository action (formerly notifier) is deprecated and will be removed in a later release. It has been replaced by an identical parser with the name json-for-action, see json-for-action.

    • Notifiers have been renamed to Actions throughout the UI and in log statements. The REST APIs have not been changed and all message templates can still be used.

    • New feature "Event forwarding" making it possible to forward events during ingest out of Humio to a Kafka server. See Event Forwarding documentation. Currently only available for on-prem customers.

    • When a host dies and Humio reassigns digest, it will warn if a fallback host is picked that is in the same zone as existing replicas. The warning is not emitted when falling back to a host in the null zone.

    • Renamed LOG4J_CONFIGURATION environment variable to HUMIO_LOG4J_CONFIGURATION. See Configuration Settings.

    • Custom made saved queries, alerts and dashboards in the humio repository searching for events of the kinds metrics, requests or nonsensitive may need to be modified. This is described in more detail in LogScale Internal Logging.

    • Reduced the number of writes to global on restart, due to merge targets not being properly reused.

    • Raised the limit for note widget text length to .00

    • API Changes (Non-Documented API): Queries and Mutations for Parser now expects an id field in place of a name field, when fetching and updating parsers.

    • Improve handling of broken local cache files

    • The Humio Repository action (formerly notifier) now replaces a prefix '#' character in field names with @tag. so that #source becomes @tag.source. This is done to make them searchable in Humio. You can change the name by creating a custom parser. See Action Type: Falcon LogScale Repository.

    • Fixed a bug where repeating queries would not validate in alerts.

    • Updated the permission checks when polling queries. This will result in dashboard links created by users who are either deleted or have lost permissions to the view becoming unauthorized. To list all dashboard links, run this GraphQL query as root:

      query {
        searchDomains {
          dashboards {
            readOnlyTokens {
              createdBy
              name
              token
            }
          }
        }
      }
    • Fixed a rare issue where the digest coordinator would assign digest to fewer hosts than configured.

    • The function parseCEF() now handles extension fields with labels, i.e. cs1=Value cs1Label=Key becomes cef.label.Key=Value.

    • In the GraphQL API, the value ChangeAlertsAndNotifiers on the Permission enum has been deprecated and will be removed in a later release. It has been replaced by the ChangeTriggersAndActions value. The same is true for the ViewAction enum. On the ViewPermissionsType type, the administerAlerts field has been deprecated and will be removed in a later release. It has been replaced by the administerTriggersAndActions field.

    • Fixed an issue where segment merge occasionally reported BrokenSegmentException when merging, while the segments were not broken.

    • Introduction of the new log file humio-requests.log. Also the log format for the files humio-metrics.log and humio-nonsensitive.log has changed as described above. See Log LogScale to LogScale.

    • Cluster management stats now show segments as underreplicated if they are not present on all configured hosts, even when replicated to enough hosts.

    • unit on timechart (and bucket) now works also when the function within uses nesting and anonymous pipelines.

    • Fixed a bug where fullscreen mode could end up blank.

    • Made cluster nodes log their own version as well as the versions of all other nodes. This makes it easier to tell which versions are running in the cluster.

    • API Changes (Non-Documented API): Getting Alert by ID has been moved to a field on the SearchDomain type.

    • Improved app loading logic.

    • The transfer job will delete primary copies shortly after transferring the segments to secondary storage. The copies would previously only be deleted once a full bulk had been moved.

    • New ingest endpoint /api/v1/ingest/raw for ingesting single web calls as events. See the Ingest API - Raw Data documentation.
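
      A hedged curl sketch against the new endpoint; the hostname is a placeholder, and the Bearer ingest-token header follows Humio's usual ingest convention (verify against the endpoint documentation):

      ```shell
      curl "https://humio.example.com/api/v1/ingest/raw" \
        -H "Authorization: Bearer $INGEST_TOKEN" \
        -H "Content-Type: text/plain" \
        --data "2021-01-14T12:00:00Z service started"
      ```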

    • Fixed an issue where canceling queries could produce a spurious error log.

    • Raised the parser test character length to .00.

    • Fixed a crash in CleanupDatasourceFilesJob when examining a file size fails due to that file being deleted concurrently.

    • Fixed a timeout issue in S3 Archiving.

    • Fixed an issue causing Humio to retain deleted mini-segments in global for longer than expected.

    • The configuration option HTTP_PROXY_ALLOW_NOTIFIERS_NOT_USE has been renamed to HTTP_PROXY_ALLOW_ACTIONS_NOT_USE. The old name will continue to work.

    • In the GraphQL API, on the Alert type, the notifiers field has been deprecated and will be removed in a later release. It has been replaced by the actions field.

    • The names of the metadata fields added by the Humio Repository action (formerly notifier) have been changed to accommodate that it can now also be used from scheduled searches. See Action Type: Falcon LogScale Repository.

    • The configuration option IP_FILTER_NOTIFIERS has been renamed to IP_FILTER_ACTIONS. The old name will continue to work.

    • New feature "Scheduled Searches" making it possible to run queries on a schedule and trigger actions (formerly notifiers) upon query results. See Scheduled Searches.

    • No longer overwrite the humio parser in the humio repository on startup.

    • Fixed an issue with updating the user profile where, in some situations, saving failed.

    • Fixed an issue that could cause node id assignment to fail when running on ephemeral disks and using ZooKeeper for node id assignment. Nodes in this configuration will now try to pick a new id if their old id has been acquired by another node.

    • New validation when creating an ingest token using the API that the parser, if specified, actually exists in the repository.

    • For ingest using a URL with a repository name in it, Humio now fails ingest if the repository in the URL does not match the repository of the ingest token. Previously, it would just use the repository of the ingest token.

    • The built-in bro-json parser is deprecated and will be removed in a later release. It has been replaced by an identical parser with the name zeek-json, see zeek-json.

    • Added a config option for the Auth0-based sign-on method: AUTH_ALLOW_SIGNUP, which defaults to true. The config is forwarded to the Auth0 configuration for the Lock widget setting allowSignUp.
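
      For example, self-signup could be disabled like this (a sketch; the default is true):

      ```shell
      # Turn off the sign-up option on the Auth0 Lock login widget
      AUTH_ALLOW_SIGNUP=false
      ```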

    • Fixed an issue causing the secondary storage transfer job to select and queue too many segments for transfer at once. The job will now stop and recalculate the bulk to transfer periodically.

    • Kafka client inside Humio has been bumped from 2.4.1 to 2.6.0.

    • Fixed an issue where the filter and groupBy buttons on the search page would not restart the search automatically.

    • Fixed a rare issue where a node that was previously assigned digest could write a segment to global, even though it was no longer assigned the associated partition.

    • Fixed an issue where the segment rewrite job handling event deletion might rewrite segments sooner than configured.

    • Add an error message to the event if the user is trying to redirect it to another repo using #repo, and the target repo is invalid.

    • Fixed logic for when the organization owner panel should be shown in the User's Danger zone.

    • Upgraded Log4j2 from 2.13.3 to 2.14.0.

    • Added timeout for TCP ingest listeners. By default the connection is closed if no data is received after 5 minutes. This can be changed by setting TCP_INGEST_MAX_TIMEOUT_SECONDS. See Ingest Listeners.
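
      For example, to extend the idle timeout to 10 minutes (value illustrative):

      ```shell
      # Close idle TCP ingest connections after 600 seconds instead of the 5-minute default
      TCP_INGEST_MAX_TIMEOUT_SECONDS=600
      ```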

    • Added mutation to update the runAsUser for a read only dashboard token.

    • Humio no longer deletes an existing humio-search-all view if the CREATE_HUMIO_SEARCH_ALL environment variable is false. The view instead becomes deletable via the admin page.

    • Reduce contention on the query scheduler input queue. It was previously possible for large queries to prevent each other from starting, leading to timeouts.

    • Humio will only allow using ZooKeeper for node id assignment (ZOOKEEPER_URL_FOR_NODE_UUID) when configured for ephemeral disks (USING_EPHEMERAL_DISKS). When using persistent disks, there is no need for the extra complexity added by ZooKeeper.
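
      A sketch of the only combination in which ZooKeeper-based node id assignment is now honored (the ZooKeeper address is a placeholder):

      ```shell
      # ZOOKEEPER_URL_FOR_NODE_UUID only takes effect together with ephemeral disks
      USING_EPHEMERAL_DISKS=true
      ZOOKEEPER_URL_FOR_NODE_UUID=zk1.example.com:2181
      ```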

    • Fixed an issue which caused free-text-search to not work correctly for large (>64KB) events.

  • Packages

    • Introduced the Humio insights package, which is installed by default on startup in the humio repository.

Improvement

  • UI Changes

    • The new query editor has a much better integration with Humio's query language. It will give you suggestions as you type, and gives you inline errors if you make a mistake. We will continue to improve the capabilities of the query editor to be aware of fields, saved queries, and other contextual information.

  • Functions

    • A new function called test() has been added for convenience. What used to be executed as: tmp := expression | tmp=true can now be done using: test(<expression>). Inside expression, a field name appearing on the right-hand side of an equality test is compared by value, so field1==field2 compares the values of the two fields. When comparing with = at top level, field1=field2 compares the value of field1 against the string "field2". This distinction is a cause of confusion for some users, and using test() avoids it.

  • Other

    • With the introduction of Humio packages we have created the Insights Package. The application is a collection of dashboards and saved searches making it possible to monitor and observe a Humio cluster.

    • We have made small changes to how Humio logs internally, to better support the new humio/insights package. We have kept the changes as small and compatible as possible, but some of them can break existing searches in the humio repository (or other repositories receiving Humio logs). We made these changes because we think they are important for improving things moving forward.

      Read more about the details of LogScale Internal Logging.

  • Packages

    • This version introduces Humio packages - a way of bundling and sharing assets such as dashboards and parsers. You can create your own packages to keep your Humio assets in Git or create utility packages that can be installed in multiple repositories. All assets can be serialized to YAML files (like what has been possible for dashboards for a while). With tight integration with Humio's CLI humioctl you can install packages from local disk, URL, or directly from a GitHub repository. Packages are still in beta, but we encourage you to start creating packages yourself and sharing them with the community. At Humio we are also very interested in talking with package authors about getting your packages on our upcoming marketplace.

      Read more about Packages.
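
The test() semantics described under Functions above can be illustrated with a short query (statuscode and expectedcode are hypothetical field names):

```logscale
// Compares the values of the two fields:
test(statuscode == expectedcode)
```

In contrast, the top-level filter statuscode = expectedcode would compare the value of statuscode against the literal string "expectedcode".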

Humio Server 1.18.4 LTS (2021-01-25)

Version: 1.18.4
Type: LTS
Release Date: 2021-01-25
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.18.0, 1.18.1, 1.18.2, 1.18.3

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.4 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.4. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Automation and Alerts

    • Fixes a bug where some valid repeating queries would not validate in alerts.

  • Other

    • Changed behaviour when the config ZONE is set to the empty string. It is now considered the same as omitting ZONE.

    • Major changes (see 1.17.0 release notes)

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads

    • Do not retry a query when getting an HTTP .0 error

    • Update dependencies with known vulnerabilities

    • Fixes a bug that would allow users with read access to be able to delete a file (#10133)

    • Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.

    • Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".

    • Improve performance of S3 archiving when many repositories have the feature enabled.

    • Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting initial responses.

    • Adds a new configuration option for auth0: AUTH_ALLOW_SIGNUP. Default value is true.

    • Fixes a bug where top([a,b], sum=f) ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer values.

    • Do not cache cancelled queries.

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

    • Improves handling when many transfers to secondary storage are pending.

    • Fixes a bug where the to parameter to unit:convert would cause internal server errors instead of validation errors.

    • Add GraphQL mutation to update the runAsUser for a read only dashboard token.

    • Fixes a bug where queries with @timestamp=x, where x was a timestamp within the current search interval, could fail

    • Fixes a bug where a query would not start automatically when requesting to filter or group by a value.

    • Fixes a bug where the merge of mini segments could fail during sampling of input for compression.

    • Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.

    • Fixed an issue for on-prem users not on a multitenant setup by reverting a metric change introduced in 1.18.0, where JMX and SLF4J metrics included an OrgId in all repository metrics.

    • Fixed bug where the format() function produced wrong output for some floating-point numbers.

    • Increased the number of vCPUs used when parsing TCP ingest to twice the number used in the 1.18.0 build.

    • Fixed a bug to reduce contention on the query input queue.

    • Only install the default Humio parser in the humio view if it is missing, so local changes are no longer overwritten.

    • Fixed a bug where Humio could end up in a corrupted state, needing manual intervention before working again.
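
As an illustration of the top() fix above, a query of this form now handles non-integer input (a, b, and f are hypothetical field names):

```logscale
// Top combinations of fields a and b, ranked by the sum of field f.
// Negative and non-numerical values of f are ignored; decimal values
// are rounded to integers.
top([a, b], sum=f)
```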

Humio Server 1.18.3 LTS (2021-01-20)

Version: 1.18.3
Type: LTS
Release Date: 2021-01-20
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.18.0, 1.18.1, 1.18.2

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.3 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.3. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Automation and Alerts

    • Fixes a bug where some valid repeating queries would not validate in alerts.

  • Other

    • Changed behaviour when the config ZONE is set to the empty string. It is now considered the same as omitting ZONE.

    • Major changes (see 1.17.0 release notes)

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads

    • Do not retry a query when getting an HTTP .0 error

    • Update dependencies with known vulnerabilities

    • Fixes a bug that would allow users with read access to be able to delete a file (#10133)

    • Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.

    • Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".

    • Improve performance of S3 archiving when many repositories have the feature enabled.

    • Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting initial responses.

    • Adds a new configuration option for auth0: AUTH_ALLOW_SIGNUP. Default value is true.

    • Fixes a bug where top([a,b], sum=f) ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer values.

    • Do not cache cancelled queries.

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

    • Improves handling when many transfers to secondary storage are pending.

    • Fixes a bug where the to parameter to unit:convert would cause internal server errors instead of validation errors.

    • Add GraphQL mutation to update the runAsUser for a read only dashboard token.

    • Fixes a bug where queries with @timestamp=x, where x was a timestamp within the current search interval, could fail

    • Fixes a bug where a query would not start automatically when requesting to filter or group by a value.

    • Fixes a bug where the merge of mini segments could fail during sampling of input for compression.

    • Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.

    • Fixed bug where the format() function produced wrong output for some floating-point numbers.

    • Increased the number of vCPUs used when parsing TCP ingest to twice the number used in the 1.18.0 build.

    • Fixed a bug to reduce contention on the query input queue.

    • Only install the default Humio parser in the humio view if it is missing, so local changes are no longer overwritten.

    • Fixed a bug where Humio could end up in a corrupted state, needing manual intervention before working again.

Humio Server 1.18.2 LTS (2021-01-08)

Version: 1.18.2
Type: LTS
Release Date: 2021-01-08
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.18.0, 1.18.1

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.2 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded at least to 1.16.0 before trying to upgrade to 1.18.2. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer. Rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Automation and Alerts

    • Fixes a bug where some valid repeating queries would not validate in alerts.

  • Other

    • Changed behaviour when the config ZONE is set to the empty string. It is now considered the same as omitting ZONE.

    • Major changes (see 1.17.0 release notes)

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads

    • Fixes a bug that would allow users with read access to be able to delete a file (#10133)

    • Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.

    • Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".

    • Improve performance of S3 archiving when many repositories have the feature enabled.

    • Resolves a problem where, when starting a query spanning very large data sets, a time-out could prevent the browser from getting initial responses.

    • Adds a new configuration option for auth0: AUTH_ALLOW_SIGNUP. Default value is true.

    • Fixes a bug where top([a,b], sum=f) ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer values.

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

    • Improves handling when many transfers to secondary storage are pending.

    • Fixes a bug where the to parameter to unit:convert would cause internal server errors instead of validation errors.

    • Add GraphQL mutation to update the runAsUser for a read only dashboard token.

    • Fixes a bug where queries with @timestamp=x, where x was a timestamp within the current search interval, could fail

    • Fixes a bug where a query would not start automatically when requesting to filter or group by a value.

    • Fixes a bug where the merge of mini segments could fail during sampling of input for compression.

    • Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.

    • Increased the number of vCPUs used when parsing TCP ingest to twice the number used in the 1.18.0 build.

    • Fixed a bug to reduce contention on the query input queue.

    • Only install the default Humio parser in the humio view if it is missing, so local changes are no longer overwritten.

    • Fixed a bug where Humio could end up in a corrupted state, needing manual intervention before working again.

Humio Server 1.18.1 LTS (2020-12-17)

Version: 1.18.1
Type: LTS
Release Date: 2020-12-17
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: No


These notes include entries from the following previous releases: 1.18.0

Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.1 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to at least 1.16.0 before trying to upgrade to 1.18.1. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer; rolling directly back to an earlier release can result in data loss.

Fixed in this release

  • Automation and Alerts

    • Fixes a bug where some valid repeating queries would not validate in alerts.

  • Other

    • Changed behaviour when the config ZONE is set to the empty string. It is now considered the same as omitting ZONE.

    • Major changes (see 1.17.0 release notes)

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads

    • Fixes a bug that would allow users with read access to be able to delete a file (#10133)

    • Improve handling of a node being missing from the cluster for a long time by letting other nodes handle the parts of the query that node would normally do.

    • Add non-sensitive logging that lists the versions of Humio running in the cluster. These logs can be found by searching the Humio debug log for "cluster_versions".

    • Improve performance of S3 archiving when many repositories have the feature enabled.

    • Fixes a bug where top([a,b], sum=f) ignored events where f was not a positive integer. Now it ignores negative and non-numerical input but rounds decimal numbers to integer values.

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

    • Fixes a bug where the to parameter to unit:convert would cause internal server errors instead of validation errors.

    • Add GraphQL mutation to update the runAsUser for a read only dashboard token.

    • Fixes a bug where queries with @timestamp=x where x was a timestamp with the current search interval could fail

    • Fixes a bug where a query would not start automatically when requesting to filter or group by a value.

    • Fixes a bug where the merge of mini segments could fail during sampling of input for compression.

    • Fixes a bug where the permissions check on editing a connection from a view to a repository allowed altering the search prefix of connections other than the one the user currently was allowed to edit.

    • Increased the number of vCPUs used when parsing TCP ingest to twice the number used in the 1.18.0 build.

    • Only install the default Humio parser in the humio view if it is missing, so local changes are no longer overwritten.

Humio Server 1.18.0 LTS (2020-11-26)

Version: 1.18.0
Type: LTS
Release Date: 2020-11-26
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes


Important Information about Upgrading

This release promotes the latest 1.17 release from preview to stable.

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.18.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to at least 1.16.0 before trying to upgrade to 1.18.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer; rolling directly back to an earlier release can result in data loss.

Humio can now run repeating queries using the beta:repeating() function. These are live queries that are implemented by repeatedly making a query. This allows using functions in alerts and dashboards that typically do not work in live queries, such as selfJoin() or selfJoinFilter(). See the beta:repeating() reference page for more information.
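
As a sketch of the repeating-query mechanism described above, a function such as selfJoin() can now be used in a live context. The field names are hypothetical, and the interval argument and its placement are assumptions; see the beta:repeating() reference page for the exact signature:

```logscale
// Re-run this otherwise non-live query on a fixed interval:
selfJoin(field=request_id, where=[{method=GET}, {status=500}])
| beta:repeating(5m)
```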

In order to prevent alert notifiers from being used to probe services on the internal network (e.g., ZooKeeper or the AWS metadata service), Humio now has an IP filter on alert notifiers. The default is to block access to all link-local addresses and any addresses on the internal network; however, you can opt in to the old behavior by setting the configuration option IP_FILTER_NOTIFIERS to allow all. See IP Filter documentation.
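
The opt-out described above can be sketched as an environment configuration (the exact value syntax is an assumption based on the wording here; check the IP Filter documentation before relying on it):

```shell
# Opt back into the pre-1.18 behavior: allow notifiers to reach any address.
IP_FILTER_NOTIFIERS="allow all"
```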

New experimental query function series()

A new experimental query function called series() has been added. It needs to be explicitly enabled on the cluster using the configuration option SERIES_ENABLED=true.

The function series() improves upon session() and collect() for grouping events into transactions. What used to be done with:

logscale Syntax
groupby(id, function=session(function=collect([fields, ...])))

can now be done using:

logscale Syntax
groupby(id, function=series([fields, ...]))

See series() reference page for more details.

This new query cache feature stores a copy of live search results on the local disk of the server nodes, and reuses the relevant parts of that cached result when an identical live search is later started. Caching is controlled with the config option QUERY_CACHE_MIN_COST, which has a default value of .0. To disable caching, set the config option to a very high number, such as 9223372036854775807 (max long value).
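
Following the paragraph above, disabling the cache can be sketched as:

```shell
# Disable live search result caching by raising the minimum cost
# to the max long value, per the description above.
QUERY_CACHE_MIN_COST=9223372036854775807
```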

To see more details, go through the individual 1.17.x release notes (links in the changelog).

Fixed in this release

  • Other

    • Changed behaviour when the config ZONE is set to the empty string. It is now considered the same as omitting ZONE.

    • Major changes (see 1.17.0 release notes)

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads

    • Removed config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. Queries on dashboards now have the same life cycle as other queries.

Humio Server 1.17.0 GA (2020-11-18)

Version: 1.17.0
Type: GA
Release Date: 2020-11-18
Availability: Cloud
End of Support: 2021-11-30
Security Updates: No
Upgrades From: 1.16.0
Config. Changes: Yes

Available for download two days after release.


Important Information about Upgrading

Beginning with version 1.17.0, if your current version of Humio is not directly able to upgrade to the new version, you will get an error if you attempt to start up the incompatible version. The 1.17.0 release is only compatible with Humio release 1.16.0 and newer. This means that you will have to ensure that you have upgraded to at least 1.16.0 before trying to upgrade to 1.17.0. In case you need to do a rollback, this can also ONLY happen back to 1.16.0 or newer; rolling directly back to an earlier release can result in data loss.

Humio can now run repeating queries using the beta:repeating() function. These are live queries that are implemented by repeatedly making a query. This allows using functions in alerts and dashboards that typically do not work in live queries, such as selfJoin() or selfJoinFilter(). See the beta:repeating() reference page for more information.

In order to prevent alert notifiers from being used to probe services on the internal network (e.g., ZooKeeper or the AWS metadata service), Humio now has an IP filter on alert notifiers. The default is to block access to all link-local addresses and any addresses on the internal network; however, you can opt in to the old behavior by setting the configuration option IP_FILTER_NOTIFIERS to allow all. See IP Filter documentation.

A new experimental query function called series() has been added. It needs to be explicitly enabled on the cluster using the config option SERIES_ENABLED set to true.

The function series() improves upon session() and collect() for grouping events into transactions. What used to be executed with:

logscale Syntax
groupby(id, function=session(function=collect([fields, ...])))

can now be executed using:

logscale Syntax
groupby(id, function=series([fields, ...]))

See series() reference page for more details.

This new query cache feature stores a copy of live search results on the local disk of the server nodes, and reuses the relevant parts of that cached result when an identical live search is later started. Caching is controlled with the config option QUERY_CACHE_MIN_COST, which has a default value of .0. To disable caching, set the config option to a very high number, such as 9223372036854775807 (max long value).

New features and improvements

Fixed in this release

  • UI Changes

    • Setting the default query for a view in the UI has been moved from "Save as Query" to the view's "Settings" tab.

  • Automation and Alerts

    • The notifier list is sorted when selecting notifiers for an alert.

  • Configuration

    • New configuration option ALERT_DESPITE_WARNINGS makes it possible to trigger alerts even when warnings occur.

    • New configuration option IP_FILTER_NOTIFIERS to set up IP filters for Alert Notifications, see IP Filter reference page.

    • New configuration option DEFAULT_MAX_NUMBER_OF_GLOBALDATA_DUMPS_TO_KEEP.

    • New configuration option ENABLE_ALERTS makes it possible to disable alerts from running (enabled by default).

  • Functions

    • New experimental query function, see beta:repeating() reference page.

    • Fixes a bug causing the sub-queries of join() etc. to not see events with an @ingesttimestamp occurring later than the search time interval.

    • New experimental query function window(), enabled by configuration option WINDOW_ENABLED=true, see window() reference page.

    • Fixes a bug causing join() to not work after an aggregating function.

    • Fixes a bug where join() function in some circumstances would fetch subquery results from other cluster nodes more than once.

    • Fixes a bug causing sort(), head(), tail() to work incorrectly after other aggregating functions.

    • New experimental query function series(), enabled by configuration option SERIES_ENABLED=true, see series() reference page.

    • New query function used to parse events which are formatted according to the Common Event Format (CEF), see parseCEF() documentation page.

  • Other

    • Reduce the max fetch size for Kafka requests, as the previous size would sometimes lead to request timeouts.

    • API Changes (Non-Documented API): Saved Query REST API has been replaced by GraphQL.

    • Fixes the issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixes an issue causing Humio to fail to upload files to bucket storage in rare cases.

    • Crash the node if an exception occurs while reading from the global Kafka topic, rather than trying to recover.

    • API Changes (Non-Documented API): View Settings REST API has been replaced by GraphQL.

    • The humio-search-all view will no longer be removed if CREATE_HUMIO_SEARCH_ALL is set to false. The view can instead be deleted manually via the admin UI.

    • Refuse to boot if the global topic in Kafka does not contain the expected starting offset.

    • Periodically release object pools used by mapper pipeline, to avoid a possible source of memory leaks.

    • Tweaked location of diagnostics regarding missing function arguments.

    • Fixes an issue where Humio might try to get admin access to Kafka when KAFKA_MANAGED_BY_HUMIO was false.

    • It is again possible to override a built-in parser in a repository by creating a parser with the same name.

    • Fix negating join expressions.

    • Changed default TLS ciphers and protocols accepted by Humio, see TLS.

    • Fix several cases where Humio might attempt to write a message to Kafka larger than what Kafka will allow.

    • Fixes the case where datasources receiving data might not be marked idle, causing Humio to retain too much ingest data in Kafka.

    • Fixes an issue which caused free-text-search to not work correctly for large (>64KB) events.

    • Switch from JDK to BouncyCastle provider for AES decrypt to reduce memory usage.

    • Allow running Humio on JDK 14 and JDK 15 to enable testing these new builds.

    • Rename a few scheduler threads so they reflect whether they're associated with streaming queries ("streaming-scheduler") or not ("normal-scheduler")

    • The {events_html} notifier template will now respect the field order from the query.

    • Improve logic attempting to ensure other live nodes can act as substitutes in case the preferred digest nodes are not available when writing new segments.

    • Reduce the number of merge target updates Humio will write to global on digest leader reassignment or reboot.

    • Free-text search has been fixed to behave more in line with the specification.

    • Improved wording of diagnostics regarding function arguments.

    • If KAFKA_MANAGED_BY_HUMIO is true, Humio will ensure unclean leader election is disabled on the global-events topic.

    • Fixes a bug where unit:convert couldn't handle numbers in scientific notation.

    • Fixes the case where Humio would consider local node state when deciding which ingest data was safe to delete from Kafka.

    • Refuse to boot if the booting node would cause violations of the "Minimum previous Humio version" as listed in the release notes.
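
The new parseCEF() function listed above can be sketched as follows (parsing from @rawstring is an assumption about the default; see the parseCEF() documentation page):

```logscale
// Parse a CEF-formatted event from the raw event string:
parseCEF(field=@rawstring)
```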

Humio Server 1.16.4 LTS (2020-11-26)

Version: 1.16.4
Type: LTS
Release Date: 2020-11-26
Availability: Cloud
End of Support: 2021-10-31
Security Updates: No
Upgrades From: 1.12.0
Config. Changes: No


These notes include entries from the following previous releases: 1.16.0, 1.16.1, 1.16.2, 1.16.3

Many bug fixes, related to join(), TCP listener threads, etc.

Fixed in this release

  • Summary

    • Avoid logging the license key.

    • Fixed an issue when starting a query, where resources related to HTTP requests were not released in a timely manner, causing an error log when the resources were released by hitting a timeout.

    • Fixed an issue where errors were not properly shown in the Humio UI.

    • Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.

    • Return a bad request response when hitting the authentication endpoint without a provider id.

    • Improved the performance of groupBy().

    • Ensure metric label names can be sent to Prometheus.

    • Fixed an issue where RegEx field extraction did not work in a query.

    • HTML sanitization for user fields in invitation mails.

    • Switched from JDK to BouncyCastle provider for AES decrypt to reduce memory usage.

    • Fix negating join expressions.

    • Optimize how certain delete operations in the global database are performed to improve performance in large clusters.

    • Fixed an issue where sorting of work in the Humio input could end up being wrong.

    • Convert some non-fatal logs to warning level instead of error.

    • Add query parameter sanitization for login and signup pages.

    • Fixed an issue with truncating files on the XFS file system, leading to excess data usage.

    • Fixed an issue preventing the metric datasource-count from counting datasources correctly.

    • Fixed a bug where TCP listener threads could take all resources from HTTP threads.

    • Prevent automatic URL to link conversion in email clients.

    • Raised the time to wait before deleting data, to improve handling of node failures.

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Log information about sorting of snapshots.

    • Fixed an issue causing Humio to fail to upload files to bucket storage in rare cases.

    • Fixed an issue which caused free-text-search to not work correctly for large (>64KB) events.

  • Automation and Alerts

    • Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.

    • Fixed a bug where the {events_html} message template was formatted as raw HTML in alert emails.

    • Add view to log lines for alerts

  • Functions

    • Fixed a bug causing sort(), head(), tail() to work incorrectly after other aggregating functions.

  • Other

    • Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.

    • Log Humio cluster version in non-sensitive log.

    • Fixed a problem where some deleted segments could show up as missing.

    • Added metrics for:

      • JVM Garbage Collection

      • JVM Memory usage

      • Missing nodes count

    • Fixed a problem where errors would not be shown in the UI

    • Major changes: (see 1.15.0 release notes)

    • Other changes: (see 1.15.2 release notes)

    • Fixed an issue where cleanup of empty datasource directories could race with other parts of the system and cause issues.

    • Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a problem where the Zone configuration would not be propagated correctly.

    • Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.

    • Reduce memory usage when using the match() or regex() query functions.

    • Fixed a bug causing the sub-queries of join() etc. not to see events with an @ingesttimestamp occurring later than the search time interval.

    • Support for license files in ES512 format.

    • Log query total cost when logging query information.

    • Improved merging of segments by evaluating less data.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Other changes: (see 1.15.1 release notes)

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Fixed a problem preventing saved queries from being edited.

    • Added background job to fix problems with inconsistent data in global.

    • Fixed a problem preventing file export/download from the search page.

    • Fixed a problem where it was not possible to rename a dashboard.

    • Fixed missing cache update when deleting a view.

    • Fixed a problem with the retention job calculating what segments to delete.

Humio Server 1.16.3 LTS (2020-11-10)

Version: 1.16.3 | Type: LTS | Release Date: 2020-11-10 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

These notes include entries from the following previous releases: 1.16.0, 1.16.1, 1.16.2

Improves memory usage of some query functions, and fixes problems with datasource cleanup, resource usage of HTTP requests, and large free-text searches.

Fixed in this release

  • Summary

    • Avoid logging the license key.

    • Fixed an issue when starting a query, where resources related to HTTP requests were not released in a timely manner, causing an error log when the resources were released by hitting a timeout.

    • Fixed an issue where errors were not properly shown in the Humio UI.

    • Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.

    • Return a bad request when hitting the authentication endpoint without a provider ID.

    • Improved the performance for GroupBy().

    • Ensure metric label names can be sent to Prometheus.

    • Fixed an issue where RegEx field extraction did not work in a query.

    • HTML sanitization for user fields in invitation mails.

    • Optimize how certain delete operations in the global database are performed to improve performance in large clusters.

    • Fixed an issue where sorting of work in the Humio input could end up being wrong.

    • Convert some non-fatal logs to warning level instead of error.

    • Add query parameter sanitization for login and signup pages.

    • Fixed an issue with truncating files on the XFS file system, leading to excess data usage.

    • Prevent automatic URL-to-link conversion in email clients.

    • Raise time to wait until deleting data to improve handling of node failures.

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Log information about sorting of snapshots.

    • Fixed an issue which caused free-text-search to not work correctly for large (>64KB) events.

  • Automation and Alerts

    • Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.

    • Added the view to log lines for alerts.

  • Other

    • Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.

    • Log Humio cluster version in non-sensitive log.

    • Fixed a problem where some deleted segments could show up as missing.

    • Added metrics for:

      • JVM Garbage Collection

      • JVM Memory usage

      • Missing nodes count

    • Fixed a problem where errors would not be shown in the UI.

    • Major changes: (see 1.15.0 release notes)

    • Other changes: (see 1.15.2 release notes)

    • Fixed an issue where cleanup of empty datasource directories could race with other parts of the system and cause issues.

    • Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a problem where the Zone configuration would not be propagated correctly.

    • Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.

    • Reduce memory usage when using the match() or regex() query functions.

    • Support for license files in ES512 format.

    • Log query total cost when logging query information.

    • Improved merging of segments by evaluating less data.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Other changes: (see 1.15.1 release notes)

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Fixed a problem preventing saved queries from being edited.

    • Added background job to fix problems with inconsistent data in global.

    • Fixed a problem preventing file export/download from the search page.

    • Fixed a problem where it was not possible to rename a dashboard.

    • Fixed missing cache update when deleting a view.

    • Fixed a problem with the retention job calculating what segments to delete.

Humio Server 1.16.2 LTS (2020-10-30)

Version: 1.16.2 | Type: LTS | Release Date: 2020-10-30 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

These notes include entries from the following previous releases: 1.16.0, 1.16.1

Improves delete operations in large clusters, and fixes problems with generating HTTP links.

Fixed in this release

  • Summary

    • Avoid logging the license key.

    • Fixed an issue where errors were not properly shown in the Humio UI.

    • Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.

    • Return a bad request when hitting the authentication endpoint without a provider ID.

    • Improved the performance for GroupBy().

    • Ensure metric label names can be sent to Prometheus.

    • Fixed an issue where RegEx field extraction did not work in a query.

    • HTML sanitization for user fields in invitation mails.

    • Optimize how certain delete operations in the global database are performed to improve performance in large clusters.

    • Fixed an issue where sorting of work in the Humio input could end up being wrong.

    • Convert some non-fatal logs to warning level instead of error.

    • Add query parameter sanitization for login and signup pages.

    • Fixed an issue with truncating files on the XFS file system, leading to excess data usage.

    • Prevent automatic URL-to-link conversion in email clients.

    • Raise time to wait until deleting data to improve handling of node failures.

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Log information about sorting of snapshots.

  • Automation and Alerts

    • Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.

    • Added the view to log lines for alerts.

  • Other

    • Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.

    • Log Humio cluster version in non-sensitive log.

    • Fixed a problem where some deleted segments could show up as missing.

    • Added metrics for:

      • JVM Garbage Collection

      • JVM Memory usage

      • Missing nodes count

    • Fixed a problem where errors would not be shown in the UI.

    • Major changes: (see 1.15.0 release notes)

    • Other changes: (see 1.15.2 release notes)

    • Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a problem where the Zone configuration would not be propagated correctly.

    • Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.

    • Support for license files in ES512 format.

    • Log query total cost when logging query information.

    • Improved merging of segments by evaluating less data.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Other changes: (see 1.15.1 release notes)

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Fixed a problem preventing saved queries from being edited.

    • Added background job to fix problems with inconsistent data in global.

    • Fixed a problem preventing file export/download from the search page.

    • Fixed a problem where it was not possible to rename a dashboard.

    • Fixed missing cache update when deleting a view.

    • Fixed a problem with the retention job calculating what segments to delete.

Humio Server 1.16.1 LTS (2020-10-21)

Version: 1.16.1 | Type: LTS | Release Date: 2020-10-21 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

These notes include entries from the following previous releases: 1.16.0

Several bug fixes related to the Humio UI, Prometheus, clusters, and RegEx queries, as well as improved GroupBy() performance and the new jvm-hiccup metric.

Fixed in this release

  • Summary

    • Avoid logging the license key.

    • Fixed an issue where errors were not properly shown in the Humio UI.

    • Fixed an issue where it was impossible to bootstrap a new cluster if ingest or storage replication factors had been configured greater than 1.

    • Return a bad request when hitting the authentication endpoint without a provider ID.

    • Improved the performance for GroupBy().

    • Ensure metric label names can be sent to Prometheus.

    • Fixed an issue where RegEx field extraction did not work in a query.

    • HTML sanitization for user fields in invitation mails.

    • Fixed an issue where sorting of work in the Humio input could end up being wrong.

    • Convert some non-fatal logs to warning level instead of error.

    • Add query parameter sanitization for login and signup pages.

    • Fixed an issue with truncating files on the XFS file system, leading to excess data usage.

    • Raise time to wait until deleting data to improve handling of node failures.

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Log information about sorting of snapshots.

  • Automation and Alerts

    • Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.

    • Added the view to log lines for alerts.

  • Other

    • Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.

    • Log Humio cluster version in non-sensitive log.

    • Fixed a problem where some deleted segments could show up as missing.

    • Added metrics for:

      • JVM Garbage Collection

      • JVM Memory usage

      • Missing nodes count

    • Fixed a problem where errors would not be shown in the UI.

    • Major changes: (see 1.15.0 release notes)

    • Other changes: (see 1.15.2 release notes)

    • Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a problem where the Zone configuration would not be propagated correctly.

    • Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.

    • Support for license files in ES512 format.

    • Log query total cost when logging query information.

    • Improved merging of segments by evaluating less data.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Other changes: (see 1.15.1 release notes)

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Fixed a problem preventing saved queries from being edited.

    • Added background job to fix problems with inconsistent data in global.

    • Fixed a problem preventing file export/download from the search page.

    • Fixed a problem where it was not possible to rename a dashboard.

    • Fixed missing cache update when deleting a view.

    • Fixed a problem with the retention job calculating what segments to delete.

Humio Server 1.16.0 LTS (2020-10-09)

Version: 1.16.0 | Type: LTS | Release Date: 2020-10-09 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

This release promotes the latest 1.15 release from preview to stable. To see more details, go through the individual 1.15.x release notes (links in the changelog).

Humio will set ingest timestamps on all events. This is set in the field named @ingesttimestamp. In later versions, Humio will also support specifying the search time interval using @ingesttimestamp when searching. This will support use cases where data is backfilled etc.

Field based throttling: It is now possible to make an alert throttle based on a field, so that new values for the field trigger the alert, but already seen values do not until the throttle period has elapsed.

Notifier logging to a Humio repository: It is now possible to configure an alert notifier that will log all events to a Humio repository.

Slack notifier upgrade to notify multiple Slack channels: It is now possible to use the Slack notifier to notify multiple Slack channels at once.

Events as HTML table: In an email notifier, it is now possible to format the events as an HTML table using the new message template {events_html}. Currently, the order of the columns is not well-defined. This problem will be fixed in the 1.17.0 release.

Configure notifier to not use the internet proxy: It is now possible to configure an alert notifier to not use the HTTP proxy configured in Humio.

Redesigned signup and login pages. For cloud, we have split the behavior so users have to explicitly either log in or sign up.

Invite flow: When adding a user to Humio they will now by default get an email telling them that they have been invited to use Humio.

Configure Humio to not use the internet proxy for S3: It is now possible to configure Humio to not use the globally configured HTTP proxy for communication with S3.

Auto-Balanced Partition Table Suggestions

When changing digest and storage partitions it is now possible to get auto-balanced suggestions based on node zone and replication factor settings (via ZONE, DIGEST_REPLICATION_FACTOR and STORAGE_REPLICATION_FACTOR configurations). See Configuration Settings.

The AWS SDK Humio uses has been upgraded to v2. When configuring Humio bucket storage with Java system properties, the secret access key must now be in the aws.secretAccessKey property instead of the aws.secretKey property.
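
As a sketch of the property rename (the launch command, jar name, and key values below are placeholders; only the two system-property names come from the note above):

```shell
# AWS SDK v1 read the secret from -Daws.secretKey; SDK v2 reads it from
# -Daws.secretAccessKey. Jar name and key values are illustrative only.
java \
  -Daws.accessKeyId="AKIAEXAMPLE" \
  -Daws.secretAccessKey="exampleSecretKey" \
  -jar humio.jar
```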

Fixed in this release

  • Automation and Alerts

    • Added the view to log lines for alerts.

  • Other

    • Bulk Global operations for segments in S3 to avoid overloading Kafka with writes.

    • Log Humio cluster version in non-sensitive log.

    • Fixed a problem where some deleted segments could show up as missing.

    • Added metrics for:

      • JVM Garbage Collection

      • JVM Memory usage

      • Missing nodes count

    • Fixed a problem where errors would not be shown in the UI.

    • Major changes: (see 1.15.0 release notes)

    • Other changes: (see 1.15.2 release notes)

    • Fixed a problem with auto sharding not working when two repositories had the same tags but differing shard counts.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a problem where the Zone configuration would not be propagated correctly.

    • Fixed a problem where the QueryScheduler could spend time idling even though there was work to do in situations where digest delays were high.

    • Support for license files in ES512 format.

    • Log query total cost when logging query information.

    • Improved merging of segments by evaluating less data.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Other changes: (see 1.15.1 release notes)

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Fixed a problem preventing saved queries from being edited.

    • Added background job to fix problems with inconsistent data in global.

    • Fixed a problem preventing file export/download from the search page.

    • Fixed a problem where it was not possible to rename a dashboard.

    • Fixed missing cache update when deleting a view.

    • Fixed a problem with the retention job calculating what segments to delete.

Humio Server 1.15.2 GA (2020-09-29)

Version: 1.15.2 | Type: GA | Release Date: 2020-09-29 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

Available for download two days after release.

Many bug fixes, including fixes related to login from Safari and Firefox and to the join() function.

Fixed in this release

  • Summary

    • Fixed a problem with scrolling on the login page on screens with low resolution.

    • Fixed a bug causing an authentication error when trying to download a file when authenticating by proxy.

    • Fixed an issue showing duplicate entries when searching for users.

    • Generate ingest tokens in UUID format, replacing the current format for any new tokens being created.

    • Changed priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Only consider fully replicated data when calculating which offsets can be pruned from Kafka.

    • Improved naming of threads to get more usable thread dumps.

    • Made the login and sign up pages responsive to the device.

    • Fixed a memory leak when authenticating in AWS setups.

    • Added logging to detect issues when truncating finished files.

    • Fixed a bug in the partition table optimizer that led to unbalanced layouts.

    • Avoid overloading Kafka with updates for the global database by collecting operations in bulk.

    • Improved handling of sub-queries polling state from the main query when using join().

    • Fixed a problem where the login link did not work in Safari and Firefox.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.

    • In the dialog for saving a search as an alert, the save button is no longer always grey and boring, but can actually save alerts again.

Humio Server 1.15.1 GA (2020-09-22)

Version: 1.15.1 | Type: GA | Release Date: 2020-09-22 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

Available for download two days after release.

Fixes bugs related to AWS STS tokens and timestamp display, and reverts the Humio UI login behavior.

Fixed in this release

  • Summary

    • Reverted login in the Humio User Interface to the same behavior as before version 1.15.0.

    • Fixed a problem in the UI, where the wrong timestamp was displayed as @ingesttimestamp.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Fixed a problem with AWS, where STS tokens would fail to authenticate.

Humio Server 1.15.0 GA (2020-09-15)

Version: 1.15.0 | Type: GA | Release Date: 2020-09-15 | Availability: Cloud | End of Support: 2021-10-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: Yes

Available for download two days after release.

Introduces ingest timestamps, field-based throttling, more configurable alert notifiers, and Slack notifier improvements.

Humio will set ingest timestamps on all events. This is set in the field named @ingesttimestamp. In later versions, Humio will also support specifying the search time interval using @ingesttimestamp when searching. This will support use cases where data is backfilled etc.

It is now possible to make an alert throttle based on a field, so that new values for the field trigger the alert, but already seen values do not until the throttle period has elapsed.

Notifier Logging to Humio Repository

It is now possible to configure an alert notifier that will log all events to a Humio repository.

It is now possible to use the Slack notifier to notify multiple slack channels at once.

In an email notifier, it is now possible to format the events as an HTML table using the new message template {events_html}.

Configure Notifier Not to use Internet Proxy

It is now possible to configure an alert notifier to not use the HTTP proxy configured in Humio.

We introduce new signup/login pages for social login and have split the behavior so users have to explicitly either log in or sign up.

When adding a user to Humio they will now by default get an email telling them that they have been invited to use Humio.

The AWS SDK Humio uses has been upgraded to v2. When configuring Humio bucket storage with Java system properties, the secret access key must now be in the aws.secretAccessKey property instead of the aws.secretKey property.

Configure Humio Not to use Internet Proxy for S3

It is now possible to configure Humio to not use the globally configured HTTP proxy for communication with S3.

Auto-Balanced Partition Table Suggestions

When changing digest and storage partitions it is now possible to get auto-balanced suggestions based on node zone and replication factor settings (via ZONE, DIGEST_REPLICATION_FACTOR, STORAGE_REPLICATION_FACTOR configurations). See Configuration Settings.
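
The settings driving these suggestions can be sketched as per-node environment configuration (the values below are illustrative, not recommendations; see Configuration Settings for the actual semantics):

```shell
# Illustrative per-node settings used by the auto-balanced suggestions.
export ZONE="zone-a"                     # failure zone this node belongs to
export DIGEST_REPLICATION_FACTOR="2"     # replicas targeted for digest partitions
export STORAGE_REPLICATION_FACTOR="2"    # replicas targeted for storage partitions
```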

Fixed in this release

  • Automation and Alerts

    • Alert notifiers can be configured to not use an HTTP proxy.

    • Field based throttling on alerts.

    • New alert notifier template {events_html} formatting events as an HTML table.

  • Other

    • S3 communication can be configured to not use an HTTP proxy.

    • Humio will set the field @ingesttimestamp on all events.

    • If automatically creating users upon login and syncing their groups from the authentication mechanism, the configuration ONLY_CREATE_USER_IF_SYNCED_GROUPS_HAVE_ACCESS now controls whether users should only be created if the synced groups have access to a repository or view. The default is false.

    • Upgraded to AWS SDK v2. When using Java system properties for configuring Humio bucket storage, use aws.secretAccessKey instead of aws.secretKey.

    • Newly added users will by default get an email.

    • New alert notifier type logging to a Humio repository.

    • Auto-balanced partition table suggestions. See ZONE, DIGEST_REPLICATION_FACTOR, STORAGE_REPLICATION_FACTOR in configuration. See Configuration Settings.

    • Improved error handling when a parser cannot be loaded. Before, this resulted in Humio returning an error to the log shipper. Now, data is ingested without being parsed, but marked with an error as described in Parser Errors.

    • CSV files can no longer contain unnamed columns, and trailing commas are disallowed. Queries based on such files will now fail with an error.

    • New explicit signup and login pages for social login.

Humio Server 1.14.6 LTS (2020-10-30)

Version: 1.14.6 | Type: LTS | Release Date: 2020-10-30 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5

Email Notification Improvements

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fix missing cache update when deleting a view.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.

    • Improve naming of threads to get more usable thread dumps.

    • Fixed a race condition when cleaning up datasources.

    • Log Humio cluster version in non-sensitive log.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Add logging to detect issues when truncating finished files.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds that each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improve performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Changed priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • Include user email in metrics when queries end.

    • Fixed a problem where some deleted segments could show up as missing.

    • Fixed an issue where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Fixed a problem where duplicated uploaded files would not be deleted from /tmp.

    • Improved handling of data replication when nodes are offline.

    • Avoid overloading Kafka with updates for the global database by collecting operations in bulk.

    • Improve handling of sub-queries polling state from the main query when using join().

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • Fixed an issue where missing input validation in alerts could lead to HTML injection in email notifications.

    • Prevent automatic URL-to-link conversion in email clients.

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.

    • HEC endpoint is now strictly validated as documented for top-level fields, which means non-valid input will be rejected. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes will now take over if a node goes offline, to keep the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
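
The stricter HEC validation noted above means a payload should contain only the documented top-level keys. A hypothetical request (the endpoint path, token, and field values are placeholders; the authoritative key set is the one in the HEC ingestion documentation):

```shell
# Placeholder URL and token; the payload sticks to standard HEC top-level
# keys (time, host, source, event). Under strict validation, undocumented
# top-level keys are rejected rather than silently accepted.
curl "$HUMIO_URL/api/v1/ingest/hec" \
  -H "Authorization: Bearer $INGEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"time": 1602230400, "host": "web-01", "source": "access.log", "event": "GET /index.html 200"}'
```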

Humio Server 1.14.5 LTS (2020-10-21)

Version: 1.14.5 | Type: LTS | Release Date: 2020-10-21 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4

Bug Fixes and New Metric

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fix missing cache update when deleting a view.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.

    • Improve naming of threads to get more usable thread dumps.

    • Fixed a race condition when cleaning up datasources.

    • Log Humio cluster version in non-sensitive log.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Add logging to detect issues when truncating finished files.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds that each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improve performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Changed priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • Include user email in metrics when queries end.

    • Fixed a problem where some deleted segments could show up as missing.

    • Fixed an issue where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Fixed a problem where duplicated uploaded files would not be deleted from /tmp.

    • Improved handling of data replication when nodes are offline.

    • Avoid overloading Kafka with updates for the global database by collecting operations in bulk.

    • Improve handling of sub-queries polling state from the main query when using join().

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.

    • HEC endpoint is now strictly validated as documented for top-level fields, which means non-valid input will be rejected. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and this data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes will now take over if a node goes offline, to keep the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.14.4 LTS (2020-10-09)

Version: 1.14.4 | Type: LTS | Release Date: 2020-10-09 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3

Bug Fixes and Stability Enhancements

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fix missing cache update when deleting a view.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.

    • Improve naming of threads to get more usable thread dumps.

    • Fixed a race condition when cleaning up datasources.

    • Log Humio cluster version in non-sensitive log.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Add logging to detect issues when truncating finished files.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improve performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Change priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • Include user email in metrics when queries end.

    • Fixed a problem where some deleted segments could show up as missing.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Fixed a problem where duplicated uploaded files would not be deleted from /tmp.

    • Improved handling of data replication when nodes are offline.

    • Avoid overloading Kafka with updates for the global database by collecting operations in bulk.

    • Improve handling of sub-queries polling state from the main query when using join().

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.

    • HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. Now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.14.3 LTS (2020-09-24)

Version: 1.14.3 | Type: LTS | Release Date: 2020-09-24 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2

Bug Fixes and Improved Query Scheduling

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fix missing cache update when deleting a view.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces e.g. when using search-all.

    • Improve naming of threads to get more usable thread dumps.

    • Fixed a race condition when cleaning up datasources.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Add logging to detect issues when truncating finished files.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improve performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Change priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • Include user email in metrics when queries end.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Improved handling of data replication when nodes are offline.

    • Improve handling of sub-queries polling state from the main query when using join().

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. Now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.14.2 LTS (2020-09-17)

Version: 1.14.2 | Type: LTS | Release Date: 2020-09-17 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.14.0, 1.14.1

Bug Fixes, HEC Endpoint Validation and New Metrics

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fixed a race condition when cleaning up datasources.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improve performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Include user email in metrics when queries end.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

    • Improved handling of data replication when nodes are offline.

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. Now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.14.1 LTS (2020-09-08)

Version: 1.14.1 | Type: LTS | Release Date: 2020-09-08 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.14.0

Bug fixes and updates.

Fixed in this release

  • Summary

    • Improve performance when processing streaming queries.

    • Remove restriction on expire time when creating emergency user through the emergency user API. See Enabling Emergency Access.

    • Remove restriction on length of group names from LDAP.

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. Now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.14.0 LTS (2020-08-26)

Version: 1.14.0 | Type: LTS | Release Date: 2020-08-26 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No


Bug fixes and updates.

Free Text Search, Load Balancing of Queries and TLS Support. This release promotes the latest 1.13 release from preview to stable. To see more details, go through the individual 1.13.x release notes (links in the changelog).

Free text search now searches all fields rather than only the @rawstring field.

Humio can now balance and reuse existing queries internally in the cluster. Load balancer configuration to achieve this is no longer needed. See Configuration Settings and Installing Using Containers.

Communication to/from ZooKeeper, Kafka, and other Humio nodes can now be encrypted using TLS.

ipLocation Database Management Changed

The database used as data source for the ipLocation() query function must be updated within 30 days when a new version of the database is made public by MaxMind. To comply with this, the database is no longer shipped as part of the Humio artifacts but will either:

  • Be fetched automatically by Humio provided that Humio is allowed to connect to the db update service hosted by Humio. This is the default behaviour.

  • Have to be updated manually (See ipLocation() reference page).

If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.

Controlling which nodes to use as query coordinators: due to the load balancing in Humio, customers that previously relied on load balancing to control which nodes are query coordinators now need to set QUERY_COORDINATOR to false on nodes they do not want to become query coordinators. See Installing Using Containers and Configuration Settings.
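
As a sketch, the opt-out described above is an environment setting applied per node (the variable name comes from the note; where the setting lives depends on how Humio is deployed, e.g. an environment file for containers):

```ini
# Sketch: keep this node out of query coordination.
# QUERY_COORDINATOR defaults to true; set it to false on nodes that
# should not become query coordinators.
QUERY_COORDINATOR=false
```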

Fixed in this release

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. Now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.

Humio Server 1.13.5 GA (2020-08-12)

Version: 1.13.5 | Type: GA | Release Date: 2020-08-12 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: Yes

Available for download two days after release.


Security and Bug Fixes

Fixed in this release

  • Summary

    • Export to file now allows field names with special characters.

    • Missing migration of non-default groups would result in alerts failing until the user backing the alert logs in again.

    • This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See: Security Disclosures

    • Export to file can now include query parameters.

Humio Server 1.13.4 GA (2020-08-05)

Version: 1.13.4 | Type: GA | Release Date: 2020-08-05 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: Yes

Available for download two days after release.


Security and Bug Fixes

Fixed in this release

  • Summary

    • Fix issue where a query could fail to search all segments if digest reassignment was occurring at the same time as the query.

    • Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.

    • This release fixes a security issue. For more information see: Security Disclosures

Humio Server 1.13.3 GA (2020-08-04)

Version: 1.13.3 | Type: GA | Release Date: 2020-08-04 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: Yes

Available for download two days after release.


Security and Bug Fix

Fixed in this release

  • Summary

    • This release fixes a security issue. For more information see: Security Disclosures

    • Avoid forbidden access errors on shared dashboard links by ensuring correct use of timestamps.

Humio Server 1.13.2 GA (2020-08-03)

Version: 1.13.2 | Type: GA | Release Date: 2020-08-03 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: Yes

Available for download two days after release.


Bug Fixes

Fixed in this release

  • Summary

    • Joins will now propagate limit warnings from sub-queries to the main query.

    • Avoid saving invalid bucket storage configurations.

    • All ingest methods now support the ALLOW_CHANGE_REPO_ON_EVENTS configuration parameter.

    • Make sure join sub-queries get canceled when the main query is canceled.

    • Default groups added.

    • Export to file no longer fails or times out on heavy sub-queries.
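
The ALLOW_CHANGE_REPO_ON_EVENTS parameter mentioned above is a server configuration setting; a sketch of enabling it (exact placement depends on your deployment, e.g. an environment file for containers):

```ini
# Sketch: the parameter named in the release note, now honored by all
# ingest methods. Its exact semantics are covered in the configuration reference.
ALLOW_CHANGE_REPO_ON_EVENTS=true
```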

Humio Server 1.13.1 GA (2020-07-03)

Version: 1.13.1 | Type: GA | Release Date: 2020-07-03 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

Available for download two days after release.


Bug Fixes and Improved Search Speeds for Many-Core Systems

Fixed in this release

  • Summary

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • S3 archiving could write events twice in a special case; now, when a merge happens where all inputs have been archived, it is recorded in global that the merge result was archived too.

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Bucket storage in GCP did not clean up all tmp files.

Humio Server 1.13.0 GA (2020-06-24)

Version: 1.13.0 | Type: GA | Release Date: 2020-06-24 | Availability: Cloud | End of Support: 2021-08-31 | Security Updates: No | Upgrades From: 1.12.0 | Config. Changes: No

Available for download two days after release.


Many improvements, including some related to free-text searching, load balancing of queries, TLS support, the ipLocation() query function, and some configuration changes.

Fixed in this release

  • Configuration

    • Humio can now balance and reuse existing queries internally in the cluster. See Configuration Settings.

    • The data source for the ipLocation() query function is no longer shipped with Humio but is installed and updated separately.

    • Free text search now searches all fields rather than only @rawstring.

    • Added support for WebIdentityTokenCredentialsProvider on AWS.

    • Introduced a new ChangeViewOrRepositoryDescription permission for editing the description of a view or repository. This was previously tied to ConnectView and any user with that permission will now have the new permission as well.

    • Internal communication in a Humio installation can now be encrypted using TLS. See TLS.

Improvement

  • Configuration

    • Controlling which nodes to use as query coordinators: due to the load balancing in Humio, customers that previously relied on load balancing to control which nodes are query coordinators now need to set QUERY_COORDINATOR to false on nodes they do not want to become query coordinators. See Configuration Settings and Installing Using Containers.

  • Other

    • Humio can now balance and reuse existing queries internally in the cluster. Load balancer configuration to achieve this is no longer needed. See Configuration Settings and Installing Using Containers.

    • Free text search now searches all fields rather than only @rawstring.

    • Communication to/from ZooKeeper, Kafka, and other Humio nodes can now be encrypted using TLS.

    • The database used as data source for the ipLocation() query function must be updated within 30 days when a new version of the database is made public by MaxMind. To comply with this, the database is no longer shipped as part of the Humio artifacts but will either:

      • Be fetched automatically by Humio provided that Humio is allowed to connect to the db update service hosted by Humio. This is the default behaviour.

      • Have to be updated manually (see ipLocation() reference page).

      If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.

Humio Server 1.12.7 LTS (2020-09-17)

Version: 1.12.7 | Type: LTS | Release Date: 2020-09-17 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4, 1.12.5, 1.12.6

Bug Fix and Additional Metrics

Fixed in this release

  • Summary

    • Fixed a race condition when cleaning up datasources

    • This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax".

    • S3 archiving could write events twice in a special case; now, when a merge happens where all inputs have been archived, it is recorded in global that the merge result was archived too.

    • Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • Remove restriction on length of group names from LDAP.

    • Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.

    • Missing migration of non-default groups would result in alerts failing until the user backing the alert logs in again.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Avoid forbidden access errors on shared dashboard links by ensuring correct use of timestamps.

    • New metrics for scheduling of queries:

      • local-query-jobs-wait: Histogram of the time in milliseconds each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of segments currently queued for querying by exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Bucket storage in GCP did not clean up all tmp files.

    • Fixed an issue that prevented deletion of unused objects in bucket storage when the bucket contained millions of objects or more.

    • This release fixes a security issue. For more information see: Security Disclosures

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.6 LTS (2020-09-03)

Version: 1.12.6 | Type: LTS | Release Date: 2020-09-03 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4, 1.12.5

Bug Fixes

Fixed in this release

  • Summary

    • This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax".

    • S3 archiving could write events twice in a special case; now, when a merge happens where all inputs have been archived, it is recorded in global that the merge result was archived too.

    • Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • Remove restriction on length of group names from LDAP.

    • Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.

    • Missing migration of non-default groups would result in alerts failing until the user backing the alert logs in again.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Avoid forbidden access errors on shared dashboard links by ensuring correct use of timestamps.

    • Bucket storage in GCP did not clean up all tmp files.

    • Fixed an issue that prevented deletion of unused objects in bucket storage when the bucket contained millions of objects or more.

    • This release fixes a security issue. For more information see: Security Disclosures

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.5 LTS (2020-08-12)

Version: 1.12.5 | Type: LTS | Release Date: 2020-08-12 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3, 1.12.4

Security and Bug Fixes

Fixed in this release

  • Summary

    • This release fixes a security issue. More information will follow when Humio customers have had time to upgrade. See Security Disclosures

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax".

    • S3 archiving could write events twice in a special case; now, when a merge happens where all inputs have been archived, it is recorded in global that the merge result was archived too.

    • Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • Fix issue where a node with no digest assignment could fail to delete local segment copies in some cases.

    • Missing migration of non-default groups would result in alerts failing until the user backing the alert logs in again.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Avoid forbidden access errors on shared dashboard links by ensuring correct use of timestamps.

    • Bucket storage in GCP did not clean up all tmp files.

    • Fixed an issue that prevented deletion of unused objects in bucket storage when the bucket contained millions of objects or more.

    • This release fixes a security issue. For more information see: Security Disclosures

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.4 LTS (2020-08-05)

Version: 1.12.4 | Type: LTS | Release Date: 2020-08-05 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2, 1.12.3

Security Fix

Fixed in this release

  • Summary

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with "invalid or unsupported Perl syntax".

    • S3 archiving could write events twice in a special case; now, when a merge happens where all inputs have been archived, it is recorded in global that the merge result was archived too.

    • Fixed an issue where events could be skipped even though they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Bucket storage in GCP did not clean up all tmp files.

    • Fixed an issue that prevented deletion of unused objects in bucket storage when the bucket contained millions of objects or more.

    • This release fixes a security issue. For more information see: Security Disclosures

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.3 LTS (2020-08-04)

Version: 1.12.3 | Type: LTS | Release Date: 2020-08-04 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1, 1.12.2

Security Fix

Fixed in this release

  • Summary

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse - "invalid or unsupported Perl syntax"

    • S3Archiving could write events twice in a special case (When a merge happens where all inputs have been archived, write in global that the merge-result was archived too).

    • Fixed an issue where events could be skipped when they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Bucket storage in GCP did not always clean up all tmp files

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.2 LTS (2020-07-03)

Version: 1.12.2 | Type: LTS | Release Date: 2020-07-03 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0, 1.12.1

Bug Fixes and Improved Search Speeds for Many-Core Systems

Fixed in this release

  • Summary

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax"

    • S3Archiving could write events twice in a special case: when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too.

    • Fixed an issue where events could be skipped when they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Bucket storage in GCP did not always clean up all tmp files

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.1 LTS (2020-06-24)

Version: 1.12.1 | Type: LTS | Release Date: 2020-06-24 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.12.0

Bug Fixes: Safari Freeze, SAML, Bucket Storage Clean-Up, Regex and Field-Aliasing

Fixed in this release

  • Summary

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • Fixed an issue where queries using lookahead in regex would fail to parse with the error "invalid or unsupported Perl syntax"

    • Fixed an issue where events could be skipped when they should not be, for queries containing field aliasing (e.g., a:=b) with subsequent checks on the aliased field.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.12.0 LTS (2020-06-09)

Version: 1.12.0 | Type: LTS | Release Date: 2020-06-09 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


Export to Bucket, findTimestamp(), selfJoin(), Emergency User Sub-System

This release promotes the 1.11 releases from preview to stable. To see more details, go through the individual 1.11.x release notes (links in the changelog).

The selfJoin() query function allows selecting log lines that share an identifier and for which separate log lines exist that match certain filtering criteria, such as "all log lines with a given userid for which both a successful and an unsuccessful login exist".
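A sketch of such a query, with illustrative field and value names (the exact parameters should be checked against the selfJoin() reference page):

```
// Select all events for each userid that has both a successful
// and a failed login. Field and value names are illustrative.
selfJoin(field=userid, where=[{loginResult=success}, {loginResult=failure}])
```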

The findTimestamp() query function will try to find and parse timestamps in incoming data. The function is intended for use in parsers and supports automatic detection of timestamps. It can be used instead of writing regular expressions that specify where to find the timestamp and parsing it with parseTimestamp(). See the findTimestamp() reference page for details.
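As a sketch, a parser that previously located the timestamp with a regular expression and parseTimestamp() might instead use (the timezone parameter is an assumption here; see the findTimestamp() reference page):

```
// Detect and parse the timestamp automatically; the assumed timezone
// parameter provides a fallback for events without zone information.
findTimestamp(timezone="UTC")
```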

As an alternative to downloading streaming queries directly, Humio can now upload them to an S3 or GCS bucket from which the user can download the data. See Data Storage, Buckets and Archiving.

If there are issues with the identity provider that Humio is configured to use, it might not be possible to log in to Humio. To mitigate this, Humio now provides emergency users that can be created locally within the Humio cluster. See Enabling Emergency Access.

Fluent Bit users might need to change the Fluent Bit configuration. To ensure compatibility with the newest Beats clients, the Elastic Bulk API has been changed to always return the full set of status information for all operations, as is done in the official Elastic API. This can, however, cause problems when using Fluent Bit to ingest data into Humio.

Fluent Bit's default configuration uses a small buffer (4KB) for responses from the Elastic Bulk API, which causes problems when enough operations are bulked together: the response, containing the status for each individual operation, then becomes larger than the response buffer. Make sure the response buffer is large enough; otherwise Fluent Bit will stop shipping data. See: https://github.com/fluent/fluent-bit/issues/2156 and https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch
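For example, the response buffer can be raised via the Buffer_Size option of the Fluent Bit es output (host, port, and size values here are illustrative; see the Fluent Bit documentation linked above):

```ini
[OUTPUT]
    Name        es
    Match       *
    # Placeholders for your Humio endpoint.
    Host        humio.example.com
    Port        443
    # Default is 4KB; raise it so the full per-operation status
    # returned by the Bulk API fits in the buffer.
    Buffer_Size 512KB
```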

Fixed in this release

  • Other

    • Other changes: (see 1.11.1 release notes)

    • Major changes: (see 1.11.0 release notes)

Humio Server 1.11.1 GA (2020-05-28)

Version: 1.11.1 | Type: GA | Release Date: 2020-05-28 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No

Available for download two days after release.


Bug Fixes and Memory Optimizations

Fixed in this release

  • Other

    • Dashboard widgets now display an error if data is not compatible with the widget

    • Several improvements to memory handling

    • Several improvements to query error handling

    • Elastic Bulk API change

Known Issues

  • Other

    • Fluent Bit users might need to change the Fluent Bit configuration. To ensure compatibility with the newest Beats clients, the Elastic Bulk API has been changed to always return the full set of status information for all operations, as it is done in the official Elastic API.

      This can, however, cause problems when using Fluent Bit to ingest data into Humio.

      Fluent Bit's default configuration uses a small buffer (4KB) for responses from the Elastic Bulk API, which causes problems when enough operations are bulked together: the response, containing the status for each individual operation, then becomes larger than the response buffer. Make sure the response buffer is large enough; otherwise Fluent Bit will stop shipping data. See: https://github.com/fluent/fluent-bit/issues/2156 and https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch

Humio Server 1.11.0 GA (2020-05-19)

Version: 1.11.0 | Type: GA | Release Date: 2020-05-19 | Availability: Cloud | End of Support: 2021-06-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: Yes

Available for download two days after release.


Export to Bucket, findTimestamp(), selfJoin(), Emergency User Sub-System

The selfJoin() query function allows selecting log lines that share an identifier and for which separate log lines exist that match certain filtering criteria, such as "all log lines with a given userid for which both a successful and an unsuccessful login exist".

The findTimestamp() query function will try to find and parse timestamps in incoming data. The function is intended for use in parsers and supports automatic detection of timestamps. It can be used instead of writing regular expressions that specify where to find the timestamp and parsing it with parseTimestamp(). See the findTimestamp() reference page for details.

As an alternative to downloading streaming queries directly, Humio can now upload them to an S3 or GCS bucket from which the user can download the data. See Bucket Storage.

If there are issues with the identity provider that Humio is configured to use, it might not be possible to log in to Humio. To mitigate this, Humio now provides emergency users that can be created locally within the Humio cluster. See Enabling Emergency Access.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration.

Fixed in this release

Humio Server 1.10.9 LTS (2020-08-05)

Version: 1.10.9 | Type: LTS | Release Date: 2020-08-05 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8

Security Fix

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • S3Archiving could write events twice in a special case: when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too.

    • Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Autocreate users on login when synchronizing groups with external provider.

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

    • Bucket storage in GCP did not always clean up all tmp files

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed an issue where long-running queries, started as part of an export or by calls to the /query API, would time out

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Fixed a number of issues with export and alerts in the humio-search-all repository.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)
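The suggested -XX:MaxDirectMemorySize value in the notes above, ((#vCpu+3)/4) GB, is the vCPU count divided by four, rounded up to a whole gigabyte; a small sketch (not from the release notes):

```python
def suggested_direct_memory_gb(vcpus: int) -> int:
    # ((#vCpu + 3) / 4) with integer division rounds up to one GB
    # per four vCPUs.
    return (vcpus + 3) // 4

# e.g. a 16-vCPU host would use -XX:MaxDirectMemorySize=4g
print(suggested_direct_memory_gb(16))
```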

Humio Server 1.10.8 LTS (2020-08-04)

Version: 1.10.8 | Type: LTS | Release Date: 2020-08-04 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.10.0 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7

Security Fix

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • S3Archiving could write events twice in a special case: when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too.

    • Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Autocreate users on login when synchronizing groups with external provider.

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

    • Bucket storage in GCP did not always clean up all tmp files

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Fixed an issue where long-running queries, started as part of an export or by calls to the /query API, would time out

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Fixed a number of issues with export and alerts in the humio-search-all repository.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.7 LTS (2020-07-03)

Version: 1.10.7 | Type: LTS | Release Date: 2020-07-03 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6

Bug Fixes and Improved Search Speeds for Many-Core Systems

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • S3Archiving could write events twice in a special case: when a merge happens where all inputs have been archived, it is now recorded in global that the merge result was archived too.

    • Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Autocreate users on login when synchronizing groups with external provider.

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

    • Bucket storage in GCP did not always clean up all tmp files

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Fixed an issue where long-running queries, started as part of an export or by calls to the /query API, would time out

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Support for a new storage format for segment files that will be introduced in a later release (to support rollback)

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved query scheduling on machines with many cores. This can improve search speeds significantly.

    • Fixed a number of issues with export and alerts in the humio-search-all repository.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.6 LTS (2020-06-24)

Version: 1.10.6 | Type: LTS | Release Date: 2020-06-24 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5

Bug Fixes: Safari Freeze, SAML and Bucket Storage Clean-Up

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • Fixed an issue with CSP that could cause the Humio UI to freeze on Safari browsers

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Fixed an issue with SAML IDPs requiring query parameters to be passed via the configuration SAML_IDP_SIGN_ON_URL

    • Autocreate users on login when synchronizing groups with external provider.

    • Fixed an issue that prevented deletion of unused objects in bucket storage if the bucket contained millions of objects or more

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Fixed an issue where long-running queries, started as part of an export or by calls to the /query API, would time out

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Fixed a number of issues with export and alerts in the humio-search-all repository.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.5 LTS (2020-06-09)

Version: 1.10.5 | Type: LTS | Release Date: 2020-06-09 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4

Bug Fixes: humio-search-all and Query Timeouts

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • Fixed an issue where a query could get a "Result is partial" warning when the query took more than 15 minutes to complete while a merge of segments addressed by the query happened in the background

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Autocreate users on login when synchronizing groups with external provider.

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Fixed an issue where long-running queries, started as part of an export or by calls to the /query API, would time out

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Fixed a number of issues with export and alerts in the humio-search-all repository.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.4 LTS (2020-05-29)

Version: 1.10.4 | Type: LTS | Release Date: 2020-05-29 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2, 1.10.3

Bug Fixes for Long-Running Queries

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Autocreate users on login when synchronizing groups with external provider.

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed humio-search-all repository interaction with alerts and notifiers.

    • Fixed a number of bugs that could cause long-running queries using join, selfJoin or selfJoinFilter to time out or fail

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.3 LTS (2020-05-20)

Version: 1.10.3 | Type: LTS | Release Date: 2020-05-20 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No


These notes include entries from the following previous releases: 1.10.0, 1.10.1, 1.10.2

Bug Fixes

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Autocreate users on login when synchronizing groups with external provider.

    • An issue could result in malformed messages being put into the ingest queue. This version is able to read and skip such messages. The issue causing such malformed messages has been fixed.

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under Administration / Users & Permissions.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed humio-search-all repository interaction with alerts and notifiers.

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.2 LTS (2020-05-19)

Version: 1.10.2 | Type: LTS | Release Date: 2020-05-19 | Availability: Cloud | End of Support: 2021-04-30 | Security Updates: No | Upgrades From: 1.8.5 | Config. Changes: No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.10.0, 1.10.1

Optimizations, Improved Humio Health Insights and Bug Fixes

Fixed in this release

  • Summary

    • A couple of memory leaks have been found and fixed.

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • Better sorting when computing query prefixes in order to reuse queries.

    • This release fixes a security issue. For more information see: Security Disclosures

    • Improvements made to speed of frontpage loading. Noticeable for customers with many repositories and groups.

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Autocreate users on login when synchronizing groups with external provider.

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Added paging in the UI under administration/Users & Permissions.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

    • Fixed Humio search-all repo interaction with alerts and notifiers.

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.1 LTS (2020-05-04)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.10.1LTS2020-05-04

Cloud

2021-04-30No1.8.5No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.10.0

Optimizations, Improved Humio Health Insights and Bug Fixes

Fixed in this release

  • Summary

    • New metric: "query-delta-cost": 30s delta cost on queries per repo, for the entire cluster.

    • This release fixes a security issue. For more information see: Security Disclosures

    • New internal jobs logging system stats: Search for #type=humio | NonSensitive | groupby(kind) to see them.

    • Thread pools have been reorganized to require fewer threads and threads have been given new names.

    • Memory requirements set using -XX:MaxDirectMemorySize are much lower now. Suggested value is ((#vCpu+3)/4) GB.

    • Improved protocol within cluster for submitting queries to allow faster start of queries on huge repositories.

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.10.0 LTS (2020-04-27)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.10.0LTS2020-04-27

Cloud

2021-04-30No1.8.5Yes

Hide file hashes

Show file hashes

UI for Role Based Access Control (RBAC), Health Check API, Kafka Version Update, Vega Charts. This release promotes the 1.9 releases from preview to stable. This release is identical to 1.9.3 apart from the version string. To see more details go through the individual 1.9.x release notes (links in the changelog).

This release fixes a number of security issues. For more information see: Security Disclosures.

Updated Humio to use Kafka 2.4. Humio can still use versions of Kafka down through 1.1. Be aware that updating Kafka also requires you to update ZooKeeper to 3.5.6. There is a migration involved in updating ZooKeeper. See the ZooKeeper migration FAQ here. Use the migration approach using an empty snapshot. The other proposed solution can lose data.

Updated Kafka and ZooKeeper Docker images to use Kafka 2.4. Updating to Kafka 2.4 should be straightforward using Humio's Kafka/ZooKeeper Docker images. ZooKeeper image will handle migration. Stop all Kafka nodes. Stop all ZooKeeper nodes. Start all ZooKeeper nodes on the new version. Start all Kafka nodes on the new version. Before updating Kafka/ZooKeeper, we recommend backing up the ZooKeeper data directory. Then, add the ZooKeeper properties described below. If you are deploying Kafka/ZooKeeper using other tools, for example Ansible scripts, be aware there is a migration involved in updating ZooKeeper.

When updating Kafka/ZooKeeper we recommend setting these ZooKeeper properties

ini
# Do not start the new admin server. Default port 8080 conflicts with Humio port.
admin.enableServer=false
# purge old snapshot files
autopurge.purgeInterval=1
# Allow 4 letter commands. Used by Humio to get info about the ZooKeeper cluster
4lw.commands.whitelist=*

Fixed in this release

  • Other

    • Dealing with missing data points in timecharts

    • Add Role Based Access Control (RBAC) to the Humio UI

    • New line interpolation options

    • Support for controlling color and title in widgets

    • Several improvements to Query Functions

    • NetFlow support extended to also support IPFIX.

    • Added Humio Health Check APIs

    • Time Chart series roll-up

    • Linear interpolation now default. New interpolation type: Basis

    • Replaced the chart library with Vega; it can be disabled using the ENABLE_VEGA_CHARTS=false flag.

    • Control widget styling directly from dashboards

    • Chart styling support (Pie, Bar)

Humio Server 1.9.3 GA (2020-04-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.9.3GA2020-04-22

Cloud

2021-04-30No1.8.5No

Available for download two days after release.

Hide file hashes

Show file hashes

Security Fixes, Bug Fixes, and timeChart() improvements

A few security vulnerabilities have been discovered as part of a proactive penetration test. None are known to have been exploited. More information will be forthcoming.

Fixed in this release

  • Functions

  • Other

    • New Time Chart interpolation options.

    • New options for dealing with missing data in Time Charts.

    • Improve disk space monitoring when using bucket storage.

    • Fixed api-explorer not working due to the CSP inline-script restriction.

    • The query metric only measured time for streaming queries; it now includes non-streaming queries as well.

    • The segment queue length metric was not correct when segments got fetched from bucket storage by a query.

    • If the global-snapshot.json file is missing at startup, Humio now tries loading the ".1" backup copy.

    • Improves responsiveness of the recent queries dropdown, and limits the number of stored recent queries per user per repository.

    • Allow dots in tagged field names.

    • Styling improvements in the "Style" panel for widgets.

    • Security: [critical] Fixed more security vulnerabilities discovered through proactive penetration testing (more information will be forthcoming).

    • Allow more concurrent processing to take place in "export" query processing.

Improvement

  • Dashboards and Widgets

    • Deal with Missing Data Points in Timecharts

      This release improves the handling of missing data points in time charts. Previously you could either interpolate missing data points based on the surrounding data, or leave gaps in the charts. With the introduction of the new charts in 1.9.0 the gaps became more apparent than before, so we have added new options for dealing with missing data points. These replace the previous "Allow Gaps" option with four new options:

      • Do Nothing - This will leave gaps in your data

      • Linear Interpolation - Impute values using linear interpolation based on the nearest known data points.

      • Replace by Mean Value - Replace missing values with the mean value of the series.

      • Replace by Zero - Replace missing values with zeros.

      The release also introduces new options for line interpolation.

      • Monotone

      • Natural

      • Cardinal

      • Catmull-Rom

      • Bundle

      The latter three are impacted by the 'tension' setting in the timechart Style editor.
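
As an illustration of the four missing-data strategies listed above, here is a small sketch; it is not LogScale's implementation, just a plain restatement of the options (do nothing, linear interpolation, mean, zero):

```python
from typing import List, Optional

def fill_missing(series: List[Optional[float]], mode: str = "linear") -> List[Optional[float]]:
    """Fill None gaps in a time-chart series: "none", "linear", "mean", or "zero"."""
    if mode == "none":
        return list(series)  # leave gaps as-is
    known = [v for v in series if v is not None]
    out = list(series)
    for i, v in enumerate(out):
        if v is not None:
            continue
        if mode == "zero":
            out[i] = 0.0
        elif mode == "mean":
            out[i] = sum(known) / len(known) if known else 0.0
        elif mode == "linear":
            # impute from the nearest known neighbours on each side
            lo = next((j for j in range(i - 1, -1, -1) if series[j] is not None), None)
            hi = next((j for j in range(i + 1, len(series)) if series[j] is not None), None)
            if lo is not None and hi is not None:
                frac = (i - lo) / (hi - lo)
                out[i] = series[lo] + frac * (series[hi] - series[lo])
            elif lo is not None:
                out[i] = series[lo]
            elif hi is not None:
                out[i] = series[hi]
    return out
```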

Humio Server 1.9.2 GA (2020-03-25)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.9.2GA2020-03-25

Cloud

2021-04-30No1.8.5No

Available for download two days after release.

Hide file hashes

Show file hashes

Security Fix and Bug Fixes

Fixed in this release

  • Summary

    • Added an API to list and delete missing segments from global.

    • Security: [critical] Fixed a security vulnerability discovered through proactive penetration testing (more information will be forthcoming).

Humio Server 1.9.1 GA (2020-03-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.9.1GA2020-03-23

Cloud

2021-04-30No1.8.5No

Available for download two days after release.

Hide file hashes

Show file hashes

Security Fix and Bug Fixes

Fixed in this release

  • Summary

    • This is a critical update. Self-hosted systems with access for non-trusted users should upgrade immediately. We will follow up with more details when this update has been rolled out.

    • The failed-http-status-check health check could get stuck in the warn state; this has now been fixed.

Humio Server 1.9.0 GA (2020-03-12)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.9.0GA2020-03-12

Cloud

2021-04-30No1.8.5Yes

Available for download two days after release.

Hide file hashes

Show file hashes

UI for Role Based Access Control (RBAC), Health Check API, Kafka Version Update, Vega Charts

Fixed in this release

  • Summary

    • Now, you can click Edit Styling in the widget menu and modify styling directly from the dashboard view.

    • Improved (reduced) memory consumption for live groupby, and for groupby involving many distinct keys.

    • Since charts are such a central feature, we allow disabling the new implementation of widgets if you are experiencing issues with them. You can disable Vega charts globally using the ENABLE_VEGA_CHARTS=false flag.

    • This version replaces our chart library with Vega. The goal is to create a better, customizable, and more interactive charting experience in Humio. This first iteration is largely just a feature replacement for the existing functionality, with a few exceptions

    • You can now style your pie charts, and they will default to having a center radius (actually making them donuts!).

    • You can now style your bar charts to control things like label position and colors.

    • Queries involving join can now be used with 'export to file' and the /query HTTP endpoint.

    • Role Based Access Control (RBAC) through the UI is now the only permission model in Humio. Please see the Manage users & permissions documentation for more information.

    • To prevent the charts from getting cluttered, you can adjust the maximum number of series that should be shown in the chart. Any series that are not part of the top-most series will be summed together and added to a new series called Other.

    • Be aware that updating Kafka also requires you to update ZooKeeper to 3.5.6. There is a migration involved in updating ZooKeeper. See the ZooKeeper migration FAQ here. Use the migration approach using an empty snapshot. The other proposed solution can lose data.

    • Humio's NetFlow support has been extended to also support IPFIX. See Humio's documentation for NetFlow Log Format.

    • Each chart type now supports assigning colors to specific series. This will allow you to, for instance, assign red to errors and green to non-errors.

    • Updated Kafka and ZooKeeper Docker images to use Kafka 2.4. Updating to Kafka 2.4 should be straightforward using Humio's Kafka/ZooKeeper Docker images. ZooKeeper image will handle migration. Stop all Kafka nodes. Stop all ZooKeeper nodes. Start all ZooKeeper nodes on the new version. Start all Kafka nodes on the new version. Before updating Kafka/ZooKeeper, we recommend backing up the ZooKeeper data directory. Then, add the ZooKeeper properties described below. If you are deploying Kafka/ZooKeeper using other tools, for example Ansible scripts, be aware there is a migration involved in updating ZooKeeper.

    • Linear interpolation is now the default, and we have added a new type of interpolation: Basis.

    • When updating Kafka/ZooKeeper we recommend setting these ZooKeeper properties

      • Do not start the new admin server; its default port 8080 conflicts with Humio's port: admin.enableServer=false

      • Purge old snapshot files: autopurge.purgeInterval=1

      • Allow 4-letter commands, used by Humio to get info about the ZooKeeper cluster: 4lw.commands.whitelist=*

    • You can find the series configuration controls in the Style tab of the Search page.

    • The overall health of a Humio system is determined by a set of individual health checks. For more information about individual checks see the Health Checks page and the Health Check API page.

    • Updated Humio to use Kafka 2.4. Humio can still use versions of Kafka down through 1.1.

  • Functions
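
The series roll-up described in the summary above (capping the number of charted series and summing the remainder into Other) can be sketched as follows; the function name and data shapes are illustrative, not LogScale's API:

```python
from typing import Dict

def roll_up_series(totals: Dict[str, float], max_series: int) -> Dict[str, float]:
    """Keep the max_series largest series; sum the rest into an "Other" series."""
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    top = dict(ranked[:max_series])
    rest = sum(v for _, v in ranked[max_series:])
    if rest:
        top["Other"] = rest
    return top
```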

Humio Server 1.8.9 LTS (2020-03-25)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.9LTS2020-03-25

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8

Security Fixes

Fixed in this release

  • Summary

    • TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.

    • "Export" queries could hit an internal limit and fail for large datasets.

    • Lower ingest queue timeout threshold from 90 to 30 seconds.

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.

    • Alerts and exports now work on the special view "humio-search-all".

    • Fixed a race in upload of segment files for systems set up using ephemeral disks.

    • The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • Fixed a security problem. This is a critical update; on-prem systems with access for non-trusted users should upgrade immediately. We will follow up with more details when this update has been rolled out.

    • When a merge of segment files fails, delete the tmp-file that was created.

    • Assigning ingest tokens to parsers in sandbox repos.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Security: [critical] Fixed a security vulnerability discovered through proactive penetration testing (more information will be forthcoming).

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions
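
As a sketch of the ephemeral-node configuration described above (the ZooKeeper hostnames are hypothetical; do not enable this on an existing cluster):

```ini
# Let ZooKeeper assign the UUID that in turn assigns the node ID.
ZOOKEEPER_URL_FOR_NODE_UUID=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
# Prefix for the generated entries, e.g. to encode rack awareness (default shown).
ZOOKEEPER_PREFIX_FOR_NODE_UUID=/humio_autouuid_
```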

Humio Server 1.8.8 LTS (2020-03-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.8LTS2020-03-23

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7

Security Fixes

Fixed in this release

  • Summary

    • TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.

    • "Export" queries could hit an internal limit and fail for large datasets.

    • Lower ingest queue timeout threshold from 90 to 30 seconds.

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.

    • Alerts and exports now work on the special view "humio-search-all".

    • Fixed a race in upload of segment files for systems set up using ephemeral disks.

    • The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • Fixed a security problem. This is a critical update; on-prem systems with access for non-trusted users should upgrade immediately. We will follow up with more details when this update has been rolled out.

    • When a merge of segment files fails, delete the tmp-file that was created.

    • Assigning ingest tokens to parsers in sandbox repos.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.7 LTS (2020-03-12)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.7LTS2020-03-12

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6

Bug Fixes

Fixed in this release

  • Summary

    • TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.

    • "Export" queries could hit an internal limit and fail for large datasets.

    • Lower ingest queue timeout threshold from 90 to 30 seconds.

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.

    • Alerts and exports now work on the special view "humio-search-all".

    • Fixed a race in upload of segment files for systems set up using ephemeral disks.

    • The Kafka and ZooKeeper images tagged with "1.8.6" were partially upgraded to Kafka 2.4.0.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • When a merge of segment files fails, delete the tmp-file that was created.

    • Assigning ingest tokens to parsers in sandbox repos.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.6 LTS (2020-03-09)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.6LTS2020-03-09

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5

Fixes a bug related to assigning ingest tokens in a Sandbox.

Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.

Fixed in this release

  • Summary

    • TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.

    • "Export" queries could hit an internal limit and fail for large datasets.

    • Lower ingest queue timeout threshold from 90 to 30 seconds.

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Note: Do not install the Kafka, ZooKeeper or "single" Docker images of this build. Install 1.8.7 or later.

    • Fixed a race in upload of segment files for systems set up using ephemeral disks.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • Assigning ingest tokens to parsers in sandbox repos.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.5 LTS (2020-02-28)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.5LTS2020-02-28

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4

Bug Fixes

Fixed in this release

  • Summary

    • TCP socket ingest listener would spend a lot of CPU when connected but not receiving any data.

    • "Export" queries could hit an internal limit and fail for large datasets.

    • Lower ingest queue timeout threshold from 90 to 30 seconds.

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Fixed a race in upload of segment files for systems set up using ephemeral disks.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.4 LTS (2020-02-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.4LTS2020-02-19

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2, 1.8.3

UI Scroll Bug Fix for Chrome 80 (again). This release is purely a fix for the Humio UI. After upgrading to Chrome 80, people have been experiencing issues with scrolling in some of Humio's widgets. We did not find all the problems in the previous release.

Fixed in this release

  • Summary

    • Major changes: (see 1.7.0 release notes)

    • Fix more scrolling issues in Chrome 80 and above.

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.3 LTS (2020-02-13)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.8.3LTS2020-02-13

Cloud

2021-01-31No1.6.10No

Hide file hashes

Show file hashes

These notes include entries from the following previous releases: 1.8.0, 1.8.1, 1.8.2

UI Scroll Bug Fix for Chrome 80. This release is purely a fix for the Humio UI. After upgrading to Chrome 80 people have been experiencing issues with scrolling on the Search page - specifically when the "Field" panel is visible.

Fixed in this release

  • Summary

    • Major changes: (see 1.7.0 release notes)

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Fix scrolling issue in Chrome 80 on the Search Page.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Bucket storage download could report "download completed" also in case of problems fetching the file.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides that, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions
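The ZooKeeper-assigned node UUID feature described above is driven by two configuration options; a hypothetical sketch (hostnames are placeholders, and remember this must not be enabled on an existing cluster):

```shell
# Let ZooKeeper assign the UUID that determines the node ID (new clusters only)
ZOOKEEPER_URL_FOR_NODE_UUID=zk1.example.com:2181,zk2.example.com:2181
# Optional prefix, e.g. per rack; the default is /humio_autouuid_
ZOOKEEPER_PREFIX_FOR_NODE_UUID=/humio_autouuid_
```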

Humio Server 1.8.2 LTS (2020-02-10)

Version: 1.8.2 | Type: LTS | Release Date: 2020-02-10 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: Yes


These notes include entries from the following previous releases: 1.8.0, 1.8.1

This is a bug fix release.

Fixed in this release

  • Summary

    • Major changes: (see 1.7.0 release notes)

    • When a node was missing for an extended period of time the remaining nodes would create smaller segment files than they should.

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Bucket storage download could report "download completed" even when there were problems fetching the file.

    • The new feature for ephemeral servers using ZooKeeper to assign the UUID did not properly reconnect when the network failed.

    • Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.1 LTS (2020-02-03)

Version: 1.8.1 | Type: LTS | Release Date: 2020-02-03 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: Yes


These notes include entries from the following previous releases: 1.8.0

Bug Fixes

This is a bug fix release.

Fixed in this release

  • Summary

    • Major changes: (see 1.7.0 release notes)

    • Fix edge case errors in the regex engine. Some case insensitive searches for some Unicode characters were not supported correctly.

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • New feature for ephemeral servers: Let ZooKeeper assign the UUID that in turn assigns the node ID in the cluster. This is turned on by setting the config option ZOOKEEPER_URL_FOR_NODE_UUID to the set of ZooKeepers to use for this. The option ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid_) sets the prefix to allow rack awareness. Note: Do not turn this on for an existing cluster. Do not turn on if running older 1.7.x or 1.8.x builds.

    • Avoid calling fallocate on platforms that do not support this (for example, ZFS).

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.8.0 LTS (2020-01-27)

Version: 1.8.0 | Type: LTS | Release Date: 2020-01-27 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: Yes


Joins, Bucket Storage Backend, Query Quotas, UI Improvements. This release promotes the 1.7 releases from preview to stable.

Fixed in this release

  • Summary

    • Major changes: (see 1.7.0 release notes)

    • Other changes: (see 1.7.1, 1.7.2, 1.7.3, and 1.7.4 release notes)

    • The ability to use Bucket Storage providers such as S3 and Google Cloud Storage for data storage.

    • Query Quotas limit the amount of resources a given user can spend. Besides those, there are a number of UI improvements, back-end improvements, and bug fixes.

  • Functions

Humio Server 1.7.4 GA (2020-01-27)

Version: 1.7.4 | Type: GA | Release Date: 2020-01-27 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: No

Available for download two days after release.


Bug Fixes

Fixed in this release

  • Summary

    • Allow webhook notifiers to optionally not validate certificates.

    • Allows "Force remove" of a node from a cluster.

    • Stabilized sync of uploaded files within a cluster in combination with bucket storage.

    • Add Chromium to the list of compatible browsers.

    • join now accepts absolute timestamps in millis in start and end parameters.

Humio Server 1.7.3 GA (2020-01-17)

Version: 1.7.3 | Type: GA | Release Date: 2020-01-17 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: No

Available for download two days after release.


Bug Fixes

Fixed in this release

  • Summary

    • ERROR logs get output to stderr instead of stdout to avoid breaking the potential stdout format.

    • The LOG4J_CONFIGURATION configuration option now also accepts the built-in log4j2-stdout-json.xml to get the log in NDJSON format, one line per event on stdout.

  • Functions

    • The top() function now allows a limit of up to 20.0 by default; it used to be 1.0.
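As a sketch, the NDJSON logging option mentioned above would be selected like this, assuming the configuration is passed as environment variables:

```shell
# Use the built-in Log4j config that emits NDJSON, one line per event, on stdout
LOG4J_CONFIGURATION=log4j2-stdout-json.xml
```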

Humio Server 1.7.2 GA (2020-01-16)

Version: 1.7.2 | Type: GA | Release Date: 2020-01-16 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: No

Available for download two days after release.


Bug Fixes

Fixed in this release

  • Summary

    • Bucket storage: Also keep copies of the "metadata files" used by the lookup() and match() functions in the bucket, and restore from there when needed.

    • USING_EPHEMERAL_DISKS allows running a cluster on disks that may be lost when the system restarts, by assuming that only the copies in Bucket Storage and the events in Kafka are preserved across restarts. If the filesystem does survive a restart, that is also okay in this mode, and more efficient than fetching the files from the bucket.

    • #repo=* never matched; it should always match.

    • LIVEQUERY_CANCEL_TRIGGER_DELAY_MS and LIVEQUERY_CANCEL_COST_PERCENTAGE control canceling of the live queries that have consumed the most cost over the previous 30s when the system experiences digest latency of more than the delay. New metrics:

      • livequeries-canceled-due-to-digest-delay

      • livequeries-rate-canceled-due-to-digest-delay

      • livequeries-rate

    • Top(x, sum=y) now also supports non-integer values of y (even though the internal state is still an integer value).

    • Bucket storage: Continue cleaning the old buckets after switching provider from S3 to GCP or vice versa.

    • The "query monitor" and "query quota" now share the definition of "cost points". The definition has changed in such a way that quotas saved by versions 1.7.1 and earlier are disregarded by this (and later) versions.

    • Retention could fail to delete obsolete files in certain cases.

    • The ZooKeeper status page now shows a warning when the commands it needs for the status page to work are not whitelisted on the ZK server.

    • New Utility inside the jar. Usage:

      java -cp humio.jar com.humio.main.DecryptAESBucketStorageFile <secret string> <encrypted file> <decrypted file>

      Allows decrypting a file that was uploaded using bucket storage outside the system.

    • Change: When the system starts with no users at all, the first user to log in gets root privileges inside the system.

    • LOG4J_CONFIGURATION allows a custom Log4j file. Or set it to one of the built-ins: log4j2-stdout.xml to get the log in plain text dumped on stdout, or log4j2-stdout-json.xml to get the log in NDJSON format, one line per event on stdout.

    • Bucket storage, GCP variant: Remove temporary files after download from GCP. Previous versions left a copy in the tmp dir.

    • Bucket storage: Support download after switching provider from S3 to GCP or vice versa.

    • Query of segments only present in a bucket now works even if disabling further uploads to bucket storage.

  • Functions

    • Restart of queries using lookup() / match() / cidr() when the uploaded file changes only worked for top-level functions, not when nested inside another function.
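The live-query cancellation thresholds described above could be configured like this; the values are purely illustrative, not documented defaults:

```shell
# Cancel the costliest live queries once digest latency exceeds 30 seconds
LIVEQUERY_CANCEL_TRIGGER_DELAY_MS=30000
# Consider queries accounting for this share of recent cost as candidates
LIVEQUERY_CANCEL_COST_PERCENTAGE=10
```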

Humio Server 1.7.1 GA (2020-01-06)

Version: 1.7.1 | Type: GA | Release Date: 2020-01-06 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: Yes

Available for download two days after release.


Bug Fixes and Removal of Limitations

Fixed in this release

  • Summary

    • Reuse of live dashboard queries on the humio-search-all repository did not work correctly. As a result, the number of live queries could keep increasing.

    • The Postmark integration always assumed a humio.com from-address. This has been fixed by introducing a new POSTMARK_FROM configuration parameter.

    • Remove 64 K restriction on individual fields to be parsed by parsers.

    • Saved Queries/macros were not expanded when checking if a live dashboard query could reuse an existing query.

    • Allow explicit auto as argument to the span parameter in bucket and timechart. This makes it easier to set span from a macro argument.

    • Handle large global snapshot files (larger than 2 G).

Humio Server 1.7.0 GA (2019-12-17)

Version: 1.7.0 | Type: GA | Release Date: 2019-12-17 | Availability: Cloud | End of Support: 2021-01-31 | Security Updates: No | Upgrades From: 1.6.10 | Config. Changes: Yes

Available for download two days after release.


Join, Bucket Storage Backend, Query Quotas, UI Improvements

Humio now supports joins in the query language; the functionality is largely similar to what could previously be done by running a query, exporting it as a .csv, uploading said .csv file, and then using the match() function to filter/amend a query result. See Join search function.

Humio now supports storing segment files on Amazon S3 (and Google cloud storage) and compatible services to allow keeping more segment files than the local disks have room for and managing the local disk as a cache of these files. See Bucket Storage.

New LTS/GA Release Versioning

Stable releases have an even minor version. If the minor version is an odd number (like in this release), it is a preview release. Critical fixes will be backported to the most recent stable release.

To make it easier to integrate with external systems, Humio dashboards can now be passed URL parameters to set the dashboard's global time interval. By passing query parameters ?time=<unix ms timestamp>&window=5m the dashboard will be opened with a 10m time window (5m before and after the origin specified by time). The feature is not available for shared dashboards, since they do not support changing time intervals.
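A minimal sketch of building such a dashboard link, assuming a hypothetical host and dashboard path (only the time and window parameters come from the text above):

```python
from urllib.parse import urlencode

def dashboard_url(base: str, origin_ms: int, window: str = "5m") -> str:
    """Link to a dashboard centered on origin_ms with +/- window around it."""
    # time is a Unix timestamp in milliseconds; window is half of the total span
    return f"{base}?{urlencode({'time': origin_ms, 'window': window})}"

# Hypothetical base URL; a 5m window opens the dashboard with a 10m total span.
url = dashboard_url("https://example.com/dashboards/abc123", 1577880000000)
print(url)  # https://example.com/dashboards/abc123?time=1577880000000&window=5m
```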

You can now also disable shared dashboards completely using the SHARED_DASHBOARDS_ENABLED=false configuration setting.

Fixed in this release

  • Configuration

    • Autosharding can now be set "sticky", meaning fixed as set by the user on a specific (input) datasource. The API also allows listing all autosharding rules, both system-managed and sticky.

    • COMPRESSION_TYPE=high is now the default compression type. Clusters running with default configuration will change to high compression unless the configuration COMPRESSION_TYPE=fast is set.

    • Add SHARED_DASHBOARDS_ENABLED configuration setting, which allows disabling access to the shared dashboards feature - if, for example, your organization has strict security policies.

  • Dashboards and Widgets

    • UI: Allow disabling automatic searching when entering a repository search page, on a per-repo basis.

    • Top Feature: Joins allowing subqueries and joining data from multiple repositories, see Join.

    • UI: Word-wrap and event list orientation are now sticky in a session, meaning revisiting the search page will keep the previously selected options.

    • UI: The time selector on dashboards now allows panning and zooming - like the one on the search page.

    • UI: Improved Query Monitor in the administration section, making it much easier to find expensive queries. See Query Monitor.

    • UI: Add support for loading a specific time window when launching a dashboard, by setting time= and window= in the URL.

    • The Queries page has been removed; the delete and edit saved query functionality has moved into the "Queries" dropdown on the search page.

    • UI: Improve word-wrap and allow columns in the event list to be marked as 'autosize'. Autosizing columns will adapt to the screen size when word-wrap is enabled.

    • UI: Don't show "unexpected error" screen when Auth Token expires.

    • Top Feature: Query quotas, allowing limits on how many resources users can use when searching, see Query Quotas.

    • Top Feature: The "Queries" page has been replaced with a dropdown on the Search page, that allows searching saved and recent queries.

    • Top Feature: Bucket Storage with support for S3 and Google cloud storage, see Bucket Storage.

    • Top Feature: Query errors will now be highlighted as you type on the search page.

    • UI: Ensure counts of fields and value occurrences on the event list are reliable.

    • Upgrading: After installing this version, it is not possible to roll back to a version lower than 1.6.10. Be on version 1.6.10 before upgrading to this version.

  • Functions

    • The implementation of the percentile() function has been updated to be more precise (and faster).

    • New function callFunction(), which allows you to call a Humio function by name. This is useful if, for instance, you want a dashboard where you can control what statistics your widgets show based on a parameter, e.g. timechart(function=callFunction(?statistic, field=response_time)).

    • New function xml:prettyPrint()

    • The function top() has a new max=field argument that makes it work as a more efficient alias for a groupby/sort combination: top(field, max=value, limit=5) is equivalent to (and much faster than) groupby(field, function=max(value)) | sort(limit=5).

    • New function json:prettyPrint()

  • Other

    • Java 13 is the recommended Java version. Docker images are now running Java 13.

    • New stable/preview release versioning scheme. See description.

    • Use case-insensitive comparison of usernames (historically an email address) when logging into Humio.

Humio Server 1.6.11 LTS (2020-01-06)

Version: 1.6.11 | Type: LTS | Release Date: 2020-01-06 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No


These notes include entries from the following previous releases: 1.6.8, 1.6.9, 1.6.10

Handle Large Global Snapshot File

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Summary

    • LDAP: It is now possible to specify an attribute within the LDAP record to use for the username rather than the default (an email address). This only applies when using the ldap-search method, by specifying LDAP_USERNAME_ATTRIBUTE in the environment. Group names when using LDAP have historically been the distinguished name (DN) for that group; it is now possible to specify an attribute in the group record for the name by setting LDAP_GROUPNAME_ATTRIBUTE. These changes necessitated a breaking change in the ldap-search code path in cases where users of Humio authenticate with a username (e.g. user) rather than an email address (e.g. user@example.com). To elicit the same behavior as previous versions of Humio, simply specify LDAP_SEARCH_DOMAIN_NAME, which in the past would default to the value of LDAP_DOMAIN_NAME but no longer does.

Fixed in this release

  • Summary

    • New background job: Find segments that are too small compared to the desired sizes (from current config) and merge them into larger files. For COMPRESSION_TYPE=high this will recompress the inputs while combining them. This job runs by default.

    • Improved memory usage when global is large.

    • Require setting LDAP_SEARCH_DOMAIN_NAME explicitly when using ldap-search authentication method.

    • Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.

    • Add LDAP_USERNAME_ATTRIBUTE and LDAP_GROUPNAME_ATTRIBUTE configuration settings to enable more control over names carried from LDAP into Humio.

    • Query sessions were not properly cleaned up after becoming unused. This led to a leak, causing a high amount of chatter between nodes.

    • Handle large global snapshot files (larger than 2 G).

    • Detect when events ingested are more than MAX_HOURS_SEGMENT_OPEN (24h by default) old, and in that case add the tag humioBackfill to them to keep "old" events from getting mixed with current "live" events.

    • Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.

    • Username/email is treated case-insensitively in Humio. This is the more expected behavior for usernames, as email addresses are often used. In rare cases, duplicate accounts might have been created differing only in casing, and this change can cause the otherwise dormant account to be chosen at the next login. If this happens, use the administration page to delete the unwanted user account and let the user log in again.
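A sketch of how the LDAP options above fit together; the attribute values shown (sAMAccountName, cn) are common choices in directory servers, not defaults taken from the text:

```shell
# Use an attribute on the LDAP record as the username instead of the email address
LDAP_USERNAME_ATTRIBUTE=sAMAccountName
# Use an attribute on the group record instead of the group's distinguished name
LDAP_GROUPNAME_ATTRIBUTE=cn
# Must now be set explicitly when using the ldap-search method
LDAP_SEARCH_DOMAIN_NAME=example.com
```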

Humio Server 1.6.10 LTS (2019-12-12)

Version: 1.6.10 | Type: LTS | Release Date: 2019-12-12 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: Yes


These notes include entries from the following previous releases: 1.6.8, 1.6.9

Bug Fixes and LDAP improvements. There are some changes to the configuration that will be required. See the change log below.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Summary

    • LDAP: It is now possible to specify an attribute within the LDAP record to use for the username rather than the default (an email address). This only applies when using the ldap-search method, by specifying LDAP_USERNAME_ATTRIBUTE in the environment. Group names when using LDAP have historically been the distinguished name (DN) for that group; it is now possible to specify an attribute in the group record for the name by setting LDAP_GROUPNAME_ATTRIBUTE. These changes necessitated a breaking change in the ldap-search code path in cases where users of Humio authenticate with a username (e.g. user) rather than an email address (e.g. user@example.com). To elicit the same behavior as previous versions of Humio, simply specify LDAP_SEARCH_DOMAIN_NAME, which in the past would default to the value of LDAP_DOMAIN_NAME but no longer does.

Fixed in this release

  • Summary

    • New background job: Find segments that are too small compared to the desired sizes (from current config) and merge them into larger files. For COMPRESSION_TYPE=high this will recompress the inputs while combining them. This job runs by default.

    • Improved memory usage when global is large.

    • Require setting LDAP_SEARCH_DOMAIN_NAME explicitly when using ldap-search authentication method.

    • Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.

    • Add LDAP_USERNAME_ATTRIBUTE and LDAP_GROUPNAME_ATTRIBUTE configuration settings to enable more control over names carried from LDAP into Humio.

    • Query sessions were not properly cleaned up after becoming unused. This led to a leak, causing a high amount of chatter between nodes.

    • Detect when events ingested are more than MAX_HOURS_SEGMENT_OPEN (24h by default) old, and in that case add the tag humioBackfill to them to keep "old" events from getting mixed with current "live" events.

    • Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.

    • Username/email is treated case-insensitively in Humio. This is the more expected behavior for usernames, as email addresses are often used. In rare cases, duplicate accounts might have been created differing only in casing, and this change can cause the otherwise dormant account to be chosen at the next login. If this happens, use the administration page to delete the unwanted user account and let the user log in again.

Humio Server 1.6.9 LTS (2019-11-25)

Version: 1.6.9 | Type: LTS | Release Date: 2019-11-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No


These notes include entries from the following previous releases: 1.6.8

Bug Fixes and a new background job that reduces number of small files on disk. No configuration changes required, but see changes to backup in 1.6.6.

Fixed in this release

  • Summary

    • New background job: Find segments that are too small compared to the desired sizes (from current config) and merge them into larger files. For COMPRESSION_TYPE=high this will recompress the inputs while combining them. This job runs by default.

    • Improved memory usage when global is large.

    • Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.

    • Detect when events ingested are more than MAX_HOURS_SEGMENT_OPEN (24h by default) old, and in that case add the tag humioBackfill to them to keep "old" events from getting mixed with current "live" events.

    • Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.

Humio Server 1.6.8 LTS (2019-11-19)

Version: 1.6.8 | Type: LTS | Release Date: 2019-11-19 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No


Bug Fixes

No configuration changes required, but see changes to backup in 1.6.6.

Fixed in this release

  • Summary

    • Segment merge could leave out some parts when merging, leading to segments not, on average, becoming as large as desired.

    • Support for "sticky autosharding" and listing of current autosharding settings for all datasources in a repository.

Humio Server 1.6.7 Archive (2019-11-04)

Version: 1.6.7 | Type: Archive | Release Date: 2019-11-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Available for download two days after release.


Bug Fixes and Performance Improvements

No configuration changes required, but see changes to backup in 1.6.6.

Fixed in this release

Humio Server 1.6.6 Archive (2019-10-23)

Version: 1.6.6 | Type: Archive | Release Date: 2019-10-23 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: Yes

Available for download two days after release.


Bug Fixes and Performance Improvements

See changes to backup in 1.6.6.

Fixed in this release

  • Summary

    • Looking at the events for e.g. a timechart was previously untenable due to a scrolling bug.

    • Improved error recovery in query language. This should make query error messages easier to read.

    • It is now possible to change the description for a repository or view.

    • Humio's built-in backup has been changed to delay deleting segment data from backup. By default, Humio will wait 7 days from when a segment file is deleted in Humio until it is deleted from backup. This is controlled using the DELETE_BACKUP_AFTER_MILLIS configuration. Only relevant if you are using Humio's built-in backup.

    • Performance improvements in digest pipeline.

    • In Chrome, saving a query and marking it as the default query of the repo would previously not save the default status.
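The 7-day default backup-deletion delay mentioned above has to be expressed in milliseconds for DELETE_BACKUP_AFTER_MILLIS; a quick sanity check of the arithmetic:

```python
# 7 days converted to milliseconds, the unit DELETE_BACKUP_AFTER_MILLIS expects
days = 7
delete_backup_after_millis = days * 24 * 60 * 60 * 1000
print(delete_backup_after_millis)  # 604800000
```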

Humio Server 1.6.5 Archive (2019-10-01)

Version: 1.6.5 | Type: Archive | Release Date: 2019-10-01 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Available for download two days after release.


Bug Fixes and Performance Improvements

No data migration required, but see version 1.6.3.

Fixed in this release

  • Summary

    • Redefined the event-latency metric to start measuring after parsing the events, just before inserting them into the ingest queue in Kafka. This metric is the basis of autosharding decisions and other scheduling priority choices internally and thus needs to reflect the time spent on the parts influenced by those decisions.

    • Support reading events from the ingest queue in both the format written by 1.6.3 and older and 1.6.4.

    • The new metric event-latency-repo/<repo> includes time spent parsing too and is heavily influenced by the size of the bulks of events being posted to Humio.

    • Apply the extra Kafka properties from config also on deleteRecordsBefore requests.

    • Improved performance of internal jobs calculating the data for the cluster management pages.

    • The new metric ingest-queue-latency measures the latency of events through the ingest queue in Kafka, from the "send" to Kafka until the event has been received by the digest node.

Humio Server 1.6.4 Archive (2019-09-30)

Version: 1.6.4 | Type: Archive | Release Date: 2019-09-30 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Available for download two days after release.

Bug Fixes and Performance Improvements. Retracted - did not properly support existing events in ingest queue.

No data migration required, but see version 1.6.3.

Fixed in this release

  • Summary

    • New metrics tracking number of active datasources, internal target latency of digest, number of threads available for queries, latency of live query updating and segment building, and latency of overall ingest/digest pipeline tracked for each repository.

    • The /query endpoint and queryjobs endpoint now coordinate thread usage, lowering the maximum total number of runnable threads from queries at any point in time.

    • Improved performance of timecharts when there are many series and timechart needs to select the "top n" ones to display.

    • Creating new labels while adding labels to a dashboard did not actually show the labels as available.

    • Do not install this build. Do not roll back from this build to 1.6.3 - update to 1.6.5 instead.

    • Improved word wrap of events list.

Humio Server 1.6.3 Archive (2019-09-25)

Version: 1.6.3 | Type: Archive | Release Date: 2019-09-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Available for download two days after release.


Dashboard parameter improvements and Bug Fixes. Data migration is required: Hash filters need rebuilding.

Dashboard parameters can depend on each other. Fixed various small UI bugs in data table. Improvements to event list column headers.

New features and improvements

  • Other

    • File based parameters on dashboards can now filter parts of a file out, by specifying a subset of entries in the file that should be used. This filtering can also be based on other parameters, so entries pulled from the file can depend on e.g. a query based parameter.

Fixed in this release

  • Functions

    • Using dropEvent() in a parser did not work when using the "Run Tests" button.

  • Other

    • EventList column header menu opens on click now, instead of on mouse hover.

    • Exporting a search (or using the /query endpoint in other contexts) would fail if any node was down, even when the files needed to satisfy the search were available on other nodes. Note that a search in progress will still fail if a node goes missing while the search runs. (Searches in the UI restart in this case, but that is not possible for an export.)

    • MAX_EVENT_SIZE defaults to 1 MB. Increasing this may have adverse effects on overall system performance.

    • When setting up Humio, the server will refuse to start if Kafka is not ready, in the sense that the number of live Kafka brokers is less than the number of Kafka bootstrap hosts in the configuration for Humio.

    • Regex matching gets rejected at runtime if it spends too many resources.

    • Improved names and states in thread dumps and added a group field to the traces. Run #type=humio class=threaddump state=RUNNABLE | timechart(group,limit=50,span=10s) in the Humio repo to get an idea of variations in what the CPU time is being spent on.

    • The Show in context window on the event list would "jump" when used on a live query on dashboards.

    • Fix issue that made the timestamp column wrap on some platforms.

    • In Chrome, it was sometimes not possible to rename a dashboard, clone a dashboard, duplicate a widget, and other actions. This has been fixed.

    • LDAP login code rewritten.

    • Make JSON word-wrapping work when a column is syntax highlighted.

    • Fixed a layout issue where the pagination of table widgets on dashboards would overflow when the table has a horizontal scroll bar.

    • Latin-1 characters (those with code points 128 to 255) were not added correctly to hash filters. To fix this, Humio needs to rebuild the existing hash filters: the old hash files get deleted, and a new file prefix "hash5h3" is applied to the new files. This will be done in the background after updating to this version. For estimation of time to complete, use a rate of .0GB/core/hour of original size. While rebuilding hash filter files, the system will have a higher load from this and from searches that would benefit from the filters but need to run without them.

    • HASHFILTER_MAX_FILE_PERCENTAGE defaults to 50. Hash filter files that are larger than this relative to their segment file do not get created. This trades the work required to scan them on search for disk space for files that are not very large.

    • Replication of segment files among nodes now runs in multiple threads, to allow faster restore from peers for a failed node.

    • Previously, exporting data from queries with parameters would always fail. This now works as expected.

    • MAX_JITREX_BACKTRACK defaults to 1.0.0: limits the CPU resources spent in a regex match, failing the search if exceeded.

    • Fix issue where streaming queries failed when a node in the cluster was unavailable.

    • The Event List widget no longer shows column menus on dashboards. Editing was not possible, but the menus would open anyway.

Humio Server 1.6.2 Archive (2019-09-04)

Version: 1.6.2 | Type: Archive | Release Date: 2019-09-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Available for download two days after release.


Event List Columns and Bug Fixes. The release replaces the event list on the search page with a table view where you can control which columns you would like to see.

Fixed in this release

  • Summary

    • The UI now only checks the version of the Humio installation when determining if it should reload dashboards.

    • Improve scheduling of uploads in S3 archiving to achieve better throughput.

    • The special handling of @display has been removed. The field is now like any other. If you use it today, you can add it as a column in your default columns.

    • If a field that you would like a column for is not present in the list of fields, you can manually add it from the toolbar of the fields panel.

    • Users are now notified about the dashboard reload 5s before reloading.

    • New Event List with customizable columns.

    • Saving a default query for your repository also saves the selected columns and will show them by default.

    • Fixed an issue where, if some cluster nodes were configured differently from others, a dashboard reload would be triggered every 10 seconds.

    • The default order of the events on the search page has been reversed. It is more natural to have newer events (lines) below older ones - just like logs appear in a log file. This can be changed in "Your Account".

    • Use the keyboard arrows and Enter key to quickly add and remove columns while in the "Filter Fields" textbox.

    • Fixed: timechart with a limit selecting the top series was nondeterministic in live queries.

    • You can now add favorite fields to your views. These fields will always be sorted to the top of the fields panel, and be visible even if they are not part of the currently visible events.

    • Browser minimum versions are checked in the UI to warn if you are using a version known to lack required features.

    • The fields panel is open by default. You can change this in "Your Account" preferences.

  • Functions

    • New query function hashRewrite() to hide the values of selected fields.

    • New query function hashMatch() to search (for an exact value) on top of the hashed values.

Humio Server 1.6.1 Archive (2019-08-26)

Version: 1.6.1 | Type: Archive | Release Date: 2019-08-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Bug Fixes

Fixed in this release

  • Summary

    • Live queries could lock the HTTP pool, leading to a combination of high CPU and problems accessing the HTTP interface.

    • Fixed issue preventing you from clicking links in Note Widgets.

Humio Server 1.6.0 Archive (2019-08-22)

Version: 1.6.0 | Type: Archive | Release Date: 2019-08-22 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.19 | Config. Changes: No

Improved compression, Note Widgets, and YAML Template Files

Dashboard Note Widgets can include descriptions and can contain template expressions and links to external systems using the current parameter and time values.

Read more about note widgets at Note Widget.

We are also introducing a new YAML file format for dashboard templates. The new format is much more human-readable. It is the first step toward being able to persist all entities (parsers, queries, alerts) as files.

Support for the now deprecated dashboard file import API and JSON format will continue, but expect it to be removed in a later release.

Fixed in this release

  • Configuration

    • COMPRESSION_TYPE=high turns on stronger compression when segments get merged. This results in better compression overall, at the expense of slightly lower compression for the very recent events. The improvement is typically 2-3 times better compression for the merged segments.

    • COMPRESSION_TYPE=extreme uses the stronger compression also in the digest part, even though it is not as effective there, since the gain comes from having a larger file after the merge.
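
The compression settings above are plain environment configuration. A sketch of a server config (values are examples, not recommendations):

```shell
# Example only: stronger compression for merged segments
COMPRESSION_TYPE=high
# or keep the pre-1.6 behavior (the default in this release):
# COMPRESSION_TYPE=fast
```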

  • Functions

    • New start() and end() functions provide the time range being queried as fields.

    • New urlEncode() and urlDecode() functions allow encoding or decoding the value of a field for use in URLs.

    • The parseJson() function now accepts exclude and include parameters. Use these to control which fields should or should not be included.
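
A sketch of how the new parseJson() parameters might be used (the field name is hypothetical; consult the parseJson() reference for exact parameter semantics):

```
// Hypothetical example: parse JSON but leave out a noisy field
parseJson(exclude="headers")
```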

  • Other

    • New copyEvent() function allows duplicating an event into another datasource while ingesting. Use a case to make the two events differ.

    • COMPRESSION_TYPE=fast (the default) corresponds to the compression used by versions before 1.6.x.

    • Styling of the dashboard "Labels" dropdown has been fixed.

    • Introducing a new YAML dashboard file format.

    • New pending parameter edits toggle, so that parameter changes are not applied immediately if the user prefers not to.

    • Added GraphQL fields for shared dashboards.

    • Renaming a repository is now possible in settings.

    • Added cluster information pages for the ZooKeeper & Kafka Cluster used by Humio. Both are available under Administration.

    • The sort() query function now ignores case when sorting strings.

    • The sizes of the compressed files and the associated hash-filter files are tracked separately for the merged part, allowing you to see in the UI how well the long-term compression works as part of the total set.

    • Removed internal REST API for shared dashboards.

    • Added Note Widget support for dashboards.

    • Changing Dashboard labels will no longer trigger a "Dashboard was modified remotely" notification.

    • Note! Rolling back to v1.5.x is supported only for COMPRESSION_TYPE=fast which is the default in this release. The default is expected to change to "high" later on. The new compression types "high" and "extreme" are considered BETA release.

    • This update does not support rolling updates. Stop all Humio nodes. Then start them on the new version.

    • Changed GraphQL fields for dashboard widgets.

    • Drawer heights were not being persisted between browser sessions.

Humio Server 1.5.23 Archive (2019-07-31)

Version: 1.5.23 | Type: Archive | Release Date: 2019-07-31 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • The disk space used by hash filter files is now included in the cluster overview as Humio data rather than system data.

    • Fixed: case statements that assigned fields inside were not handled properly when pre-filtering using the hash filters.

  • Configuration

    • CACHE_STORAGE_SOURCE defaults to both; it can also be set to secondary to only cache files from the secondary storage.

  • Functions

    • Function collect() now requires the set of fields to collect.

Humio Server 1.5.22 Archive (2019-07-11)

Version: 1.5.22 | Type: Archive | Release Date: 2019-07-11 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • Improved performance of "/query" endpoint.

    • There is now a humio-query-id response header on responses to "/query" search requests.

    • Always close everything when the Akka actor system is terminated.

Humio Server 1.5.21 Archive (2019-07-04)

Version: 1.5.21 | Type: Archive | Release Date: 2019-07-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • If an event gets @error=true in the ingest pipeline (including in the parser), it will also get #error=true as a tag. This makes events with an error become a separate datasource in Humio, allowing you to delete them independently of the others, and prevents problems from parsing timestamps from disrupting the pipeline when backfilling old events.

  • Functions

    • New function dropEvent() lets you discard an event in the parser pipeline. If a parser filters out events using e.g. a regex that does not match, the parser will just keep the incoming events. Use this new function (typically in a case) to explicitly drop an event while parsing when it does not match the required format.
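
A sketch of the pattern described above, explicitly dropping unwanted events in a parser (the field and value are hypothetical):

```
case {
  loglevel = "DEBUG" | dropEvent();   // discard debug noise while parsing
  *                                   // keep everything else
}
```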

Humio Server 1.5.20 Archive (2019-07-04)

Version: 1.5.20 | Type: Archive | Release Date: 2019-07-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • "services" is no longer a reserved repo name.

Humio Server 1.5.19 Archive (2019-07-03)

Version: 1.5.19 | Type: Archive | Release Date: 2019-07-03 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

File based parameters on dashboards. This release makes it easier to configure load balancers by adding sticky session headers to most UI Http requests.

The existing header Humio-Query-Session is used. For non-search related HTTP requests it will contain a random sticky session ID. For search related HTTP requests it contains a hash of the query being executed - just like it has done previously.

A new file based parameter type has been added to dashboards.

Fixed in this release

  • Queries

    • The HTTP request header Humio-Query-Session is now added to most requests from the UI.

  • Other

    • Failover to the next node in digest when a node is shut down gracefully is now faster: the shutdown is delayed a few seconds while the follower catches up.

    • Fixed Interval Queries on dashboards used the time the dashboard was loaded as the definition of "now". They now use the time of the last change to the dashboard's global time.

    • Improved performance on servers with many cores for functions (such as top) that may require large states internally.

    • New shared files. Shared files can be used like the existing files that are uploaded to repositories. A shared file is visible in all repositories and can be used by everyone; only root users can create and manage them. For now, shared files can only be added using Humio's Lookup API. Shared files are visible in the files tab in all repositories for all users, where root users can also edit and delete them. Shared files can be used from query functions like lookup() and match(); they are referenced using the path /shared/filename.

    • New type of parameter: dashboards can now have file based parameters, which are populated with data from files uploaded to Humio.
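
The /shared/ path described above is referenced like any other file. A sketch (the file and field names are hypothetical; consult the match() reference for exact parameters):

```
// Hypothetical example: enrich events from a shared lookup file
match(file="/shared/blocklist.csv", field=src_ip)
```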

Humio Server 1.5.18 Archive (2019-06-26)

Version: 1.5.18 | Type: Archive | Release Date: 2019-06-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

New parseXml() function and support for ephemeral drives for caching.

Fixed in this release

  • Summary

    • Humio can now keep a cache of the latest files when given the path of a cache directory using CACHE_STORAGE_DIRECTORY. Humio will then write copies of some of the files from primary and secondary storage there, assuming it is faster to read from the cache. The cache does not need to survive a restart of Humio. CACHE_STORAGE_PERCENTAGE (default 90) controls how much of the available space on the drive Humio will try to use. This is useful on systems such as AWS where the primary data storage is durable but slow due to being across a network (e.g. EBS), while the server also has fast NVMe drives that are ephemeral to the instance.

    • Certain regular expressions involving ^ and $ could fail to match.

    • MAX_EVENT_FIELD_COUNT (default .0) controls the enforced maximum number of fields in an event in the ingest phase.

    • New built-in parser corelight-es to parse Corelight data sent using the Elastic protocol.

    • Reduce size of global snapshots file.

    • Remove configuration flags: REPLICATE_REMOTE_GLOBAL_HOST and REPLICATE_REMOTE_GLOBAL_USER_TOKEN

    • Parameter input fields for query based parameters initially always showed * even when a default value was set. It now correctly shows the default value for the parameter.
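
The cache settings described above are plain environment configuration. A sketch (the directory path is hypothetical):

```shell
# Example only: cache hot files on a fast ephemeral NVMe drive
CACHE_STORAGE_DIRECTORY=/ephemeral/humio-cache
CACHE_STORAGE_PERCENTAGE=90   # default; share of the drive Humio may use
```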


Humio Server 1.5.17 Archive (2019-06-20)

Version: 1.5.17 | Type: Archive | Release Date: 2019-06-20 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • Updated the BitBucket OAuth integration to version 2.

    • Fixed: updates to repos with reserved names on legacy repos did not work.

Humio Server 1.5.16 Archive (2019-06-11)

Version: 1.5.16 | Type: Archive | Release Date: 2019-06-11 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Maintenance Build

Fixed in this release

  • Summary

    • On Windows, the Ctrl+O shortcut no longer opens the "jump" menu on the home page, but Ctrl+Y does instead, to avoid conflicts with browser shortcuts.

    • Ability to read global-snapshot.json when the file is larger than 1GB.

    • Invalid parser no longer prevents ingest token page from loading.

Humio Server 1.5.15 Archive (2019-06-06)

Version: 1.5.15 | Type: Archive | Release Date: 2019-06-06 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Dashboard Improvements and Bug Fixes

Fixed in this release

  • Summary

    • Regex with [^\W] did not execute as [\w], as it should.

    • Dashboard parameters with fixed list of values now keep the order they were configured with

    • Dashboard parameters with fixed list of values can now have labels for each of the values

    • Humio metrics have now been documented.

  • Configuration

    • VALUE_DEDUP_LEVEL defaults to the compression level. The range is [0 ; 63]. Higher values may trade extra digest time for lower storage of events with many fields.

  • Functions

    • New eventFieldCount() function that returns the number of fields that this event uses internally for the values; use it along with eventSize() to get statistics on how your events are stored.

Humio Server 1.5.14 Archive (2019-05-29)

Version: 1.5.14 | Type: Archive | Release Date: 2019-05-29 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Improved Pre-Filters

Fixed in this release

  • Summary

    • The disk space occupied by the pre-filter files now get included when enforcing retention by compressed size.

    • The format of the new hash5h2 files differs from the previous bloom5h1 format; the system will generate new files from scratch for all existing segment files and delete any existing bloom5h1 files.

    • Improved pre-filters to support more searches while adding less overhead in disk space.

    • When the file and column parameters to the cidr() function are used together, the subnet list is loaded from the given CSV file.

  • Automation and Alerts

    • When an alert fails to send the notification, the query is no longer restarted; the notification is just retried later.

Humio Server 1.5.13 Archive (2019-05-27)

Version: 1.5.13 | Type: Archive | Release Date: 2019-05-27 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Metrics are now sent to a separate file, humio-metrics.log.

Fixed in this release

  • Summary

    • New log file humio-metrics.log. Metrics data has been removed from humio-debug.log and moved to humio-metrics.log. Metrics will still also be in the default Humio repository. If you are collecting the Humio log files, with for example Filebeat, you need to add humio-metrics.log to the collector.

    • Fix some cases where parameters would not be picked up by the UI because of regex or string literals in the query.

Humio Server 1.5.12 Archive (2019-05-20)

Version: 1.5.12 | Type: Archive | Release Date: 2019-05-20 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Parameters can be used to make dashboards and queries dynamic.

Fixed in this release

  • Summary

    • You can now use the syntax ?param in queries. This will add input boxes to the search and dashboard pages. Read more in the Manage Dashboard Parameters documentation.

    • Parallel upload of segment files to S3. Degree of parallelism can be controlled with e.g. S3_ARCHIVING_WORKERCOUNT=4. Default is 1 if nothing is specified.

    • URLs now contain parameter values making it easy to share specific dashboard configurations.
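
The ?param syntax above turns an ordinary query into a templated one. A sketch (the field and parameter names are hypothetical):

```
// Hypothetical example: filter on a dashboard parameter named "level"
loglevel = ?level
| timechart(span=1h)
```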

Humio Server 1.5.11 Archive (2019-05-16)

Version: 1.5.11 | Type: Archive | Release Date: 2019-05-16 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Bloom filters are now always on.

    • Named groups in regular expressions now support having . [ ] in their names.

    • Moving segments to secondary storage can no longer be blocked by merging of segment files / s3 archiving.

Humio Server 1.5.10 Archive (2019-05-13)

Version: 1.5.10 | Type: Archive | Release Date: 2019-05-13 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: Yes

New bloom filters, but please upgrade to 1.5.11 to avoid known problems in this build.

Fixed in this release

  • Summary

    • New experimental bloom filters that speed up searching for constant strings such as UUIDs and IP-addresses; the longer the search string, the bigger the speedup. The bloom filters also help regular expression searching, including case insensitive ones.

    • MUST be enabled with BLOOMFILTER_ENABLED=true (Note! defaults to false in this release, which makes searches skip events they should not).

    • When enabled, this will write files along with the segment files with prefix bloom5h1., which add approximately 5% storage overhead.

    • The bloom filter files will be generated as part of digest work, and also generated for "old" segment files when Humio is otherwise idle. Thus, when the feature is initially enabled, it will be visible that the CPU load is higher for a period of time.

    • It is safe to just delete any bloom5h1. files while the system is running, or in case the feature needs to be disabled.

Humio Server 1.5.9 Archive (2019-05-06)

Version: 1.5.9 | Type: Archive | Release Date: 2019-05-06 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.5.8 | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Add information on query prefixes to the Query Monitor. When inspecting a running query in the Query Monitor the query prefix can now be found in the details pane.

    • Enabled the sourcetype field in the HEC endpoint to choose the parser (unless another parser is attached to the token).

    • Default filters in dashboards could cause search to not find anything.

  • Automation and Alerts

    • Alerts with multiple notifiers could result in notifications not adhering to the configured notification frequency resulting in notification spam.

Humio Server 1.5.8 Archive (2019-04-25)

Version: 1.5.8 | Type: Archive | Release Date: 2019-04-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: Yes

New dashboard editing code and many other improvements

Fixed in this release

  • Summary

    • In table view, if column data is of the form [Label](URL), it is displayed as Label with a link to URL.

    • Dashboard queries that are not live and use a time interval relative to now are migrated to be live queries. Going forward, queries with time intervals relative to now will be live queries when added to dashboards.

    • S3 archiving now supports forward proxies.

    • parseTimestamp() now handles dates, e.g. 31-08-2019.

    • @source and @host are now supported for Filebeat v7.

    • The Auth0 integration now supports importing Auth0-defined roles. The new config AUTH0_ROLES_KEY identifies the name of the role attribute in the JWT token from Auth0. See the new Auth0 config options in Map Auth0 Roles.

    • Validation of bucket and region when configuring S3 archiving.

    • Alert notifiers with the standard template did not produce valid JSON.

    • Built-in audit-log parser now handles a variable number of fractions of seconds.

    • Humio's own Jitrex regular expression engine is again the default one.

  • Configuration

    • The config property KAFKA_DELETES_ALLOWED has been removed and replaced by DELETE_ON_INGEST_QUEUE, which is set to true by default. When this flag is set, Humio will delete data on the Kafka ingest queue once the data has been written in Humio. If the flag is not set, Humio will not delete from the ingest queue. No matter how this flag is set, it is important to configure retention for the queue in Kafka. If Kafka is managed by Humio, Humio will set a 48 hour retention when creating the queue. This defines how long data can be kept on the ingest queue, and thus how much time Humio has to read the data and store it internally.
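
The new flag is plain environment configuration. A sketch:

```shell
# Example only: let Humio delete from the Kafka ingest queue once data is stored
DELETE_ON_INGEST_QUEUE=true   # the default
# Kafka-side retention for the ingest topic still matters regardless of this flag
```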

Humio Server 1.5.7 Archive (2019-04-10)

Version: 1.5.7 | Type: Archive | Release Date: 2019-04-10 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Temporarily disabled deletes of events from the ingest queue to allow recovering events skipped in the queue due to the jitrex infinite loop problem below.

    • Reverted the default regex engine from jitrex to RE2J. jitrex has a case where it may loop infinitely, and this would break the digest pipeline if it happened in a live query.

Humio Server 1.5.6 Archive (2019-04-04)

Version: 1.5.6 | Type: Archive | Release Date: 2019-04-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • LDAP integration.

Humio Server 1.5.5 Archive (2019-04-03)

Version: 1.5.5 | Type: Archive | Release Date: 2019-04-03 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

Event Context

Fixed in this release

  • Summary

    • The size of the Drawer, showing event details on the search page, is now remembered (saved to local storage).

    • New event context searches let users select and search around one specific event.

    • Segment merging could reuse a tmp file when the system was restarted, which would block the merging process on that host from making progress.

    • Fix bug in regex not recognizing [0-9] as part of \w

    • Restart all relevant queries when an uploaded file gets changed. This allows live queries and alerts to refresh using the latest version of the file.

    • Live timecharts could accumulate data for 2 buckets instead of 1 into the bucket that was right-most when the chart starts.

    • A GET on /api/v1/users that lists all known users on the system no longer includes information on the repositories for the user, as that made it too slow.

    • Display information on disk space usage of primary / secondary storage location in cluster management UI.

    • Uploaded files cannot be bigger than specified in the config MAX_FILEUPLOAD_SIZE. The default value is .0 megabytes, and the default is used in our cloud.


Humio Server 1.5.4 Archive (2019-03-26)

Version: 1.5.4 | Type: Archive | Release Date: 2019-03-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • The query scheduler could get into a state of doing no work when overloaded during startup. Workaround while working on a proper solution: raise the internal queue size.

Humio Server 1.5.3 Archive (2019-03-26)

Version: 1.5.3 | Type: Archive | Release Date: 2019-03-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Webhook requests could end up malformed.

    • New config flag WARN_ON_INGEST_DELAY_MILLIS: how far behind the ingest delay must fall before a warning is shown in the search UI. The default is 30 seconds.
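
The new flag is plain environment configuration. A sketch (the value is an example, not a recommendation):

```shell
# Example only: warn in the search UI when ingest delay exceeds 60 seconds
WARN_ON_INGEST_DELAY_MILLIS=60000   # default is 30000 (30 seconds)
```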

Humio Server 1.5.2 Archive (2019-03-25)

Version: 1.5.2 | Type: Archive | Release Date: 2019-03-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

New functions for Math and Time operations.

Fixed in this release

  • Summary

    • Query prefixes for users were not properly applied to the "export" api.

    • Repositories with names starting with "api" were inaccessible.

    • Date picker marked the wrong day as current.

    • New functions for "Math" and "Time" operations.

    • Java version check: Allow JDK-11 and 12.

Humio Server 1.5.1 Archive (2019-03-22)

Version: 1.5.1 | Type: Archive | Release Date: 2019-03-22 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

The default regex engine has been replaced.

Fixed in this release

  • Summary

    • The new regex engine (Humio jitrex) is now the default; configure it using DEFAULT_USER_INPUT_REGEX_ENGINE=HUMIO|RE2J. If you experience issues with regular expressions, try setting the configuration back to the previous default, RE2J. You can also pass the special flags /.../G (for Google re2j) or /.../H (for Humio jitrex) to compare.

    • Timechart more efficient in the backend, better supporting more than 1 series.

    • New implementation backing match(...) for exact matching (glob=false) allows using .csv files up to 1 million lines. The limit for exact match state size can be set using EXACT_MATCH_LIMIT=.0.0.

    • No owls were hurt in the production of this release.

    • Kill query or blacklist query did not always kill the query on all nodes.
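
The engine-selection flags mentioned above can be used to compare the two engines on the same pattern (the pattern is hypothetical; run each line as its own query):

```
/time=\d+ms/H   // force Humio jitrex
/time=\d+ms/G   // force Google re2j
```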

Humio Server 1.5.0 Archive (2019-03-15)

Version: 1.5.0 | Type: Archive | Release Date: 2019-03-15 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.4.x | Config. Changes: No

All parsers are now written in Humio's query language.

Fixed in this release

  • Summary

    • New BETA feature: Delete Events allows deleting a set of events from the internal store using a filter query and a time range. At this point there is only API (GraphQL and REST) for this but no UI.

    • The option 'PARSE NESTED JSON' on the old json parser creation page is no longer available/supported. Instead use parseJson() on specific fields, e.g. parseJson() | parseJson(field=foo). This has to be done manually for migrated JSON parsers.

    • Fixed the permission for editing retention when running with ENFORCE_AUDITABLE=true.

    • Migrated regex parsers with the option 'PARSE KEY VALUES' enabled have different parse semantics: if the regex fails, key values will no longer be extracted.

    • All parsers created before the introduction of parsers written in Humio's query language are migrated.

    • Non root users could not see sandbox data when using RBAC.

Humio Server 1.4.9 Archive (2019-03-13)

Version: 1.4.9 | Type: Archive | Release Date: 2019-03-13 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.3.2 | Config. Changes: No

Bug Fix of retention not working.

Fixed in this release

  • Summary

    • Retention was not applied to all segment files in a clustered setup. The bug was introduced in 1.4.4.

    • Increased the AUTOSHARDING_MAX default from 8 to 16, and autosharding now starts at 4 instead of 2.

    • Prevent labels in gauge widgets from being clipped.

  • Automation and Alerts

    • Fixed white space handling in field templates in alerts.

Humio Server 1.4.8 Archive (2019-03-11)

Version: 1.4.8 | Type: Archive | Release Date: 2019-03-11 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.3.2 | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Bug fix: create default directories in the ZooKeeper Docker image.

    • Bug fix: fixed an issue introduced in the last release in the handling of error messages.

Humio Server 1.4.7 Archive (2019-03-07)

Version: 1.4.7 | Type: Archive | Release Date: 2019-03-07 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.3.2 | Config. Changes: Yes

Saved queries allowed in case and match.

Fixed in this release

  • Summary

    • Humio now requires Java version 11. The docker images for Humio now include Java 11. If you run the "plain jar" you must upgrade your Java to 11.

    • Improved handling of the "Kafka reset" aka "Start from fresh Kafka" aka "Set a new topic prefix". Humio detects and properly handles starting after the user has wiped the Kafka installation, or pointed to a fresh install of Kafka.

    • Upgraded Kafka to 2.1.1 in our Docker images and in the Java client in Humio. Humio is still compatible with older versions of Kafka. The lowest supported Kafka version is 1.1.0.

    • Saved queries now supported in case and match.

Humio Server 1.4.6 Archive (2019-03-04)

Version: 1.4.6 | Type: Archive | Release Date: 2019-03-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.3.2 | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Prevent 'http response splitting' attack in the "export as" function.

    • The personal sandbox was missing in list of visible repos for non-root users when READ_GROUP_PERMISSIONS_FROM_FILE was enabled.

    • ENABLE_PERSONAL_API_TOKENS defaults to true. When set to false the API tokens are no longer valid as auth-tokens.

    • When @timestamp is in the filter part of the search, it now limits the time interval as if selected in the Time Selector.

Humio Server 1.4.5 Archive (2019-02-27)

Version: 1.4.5 | Type: Archive | Release Date: 2019-02-27 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.3.2 | Config. Changes: No

Bug Fix Release

Fixed in this release

  • Summary

    • Many background tasks now get executed only on hosts with segment storage partitions assigned, and the hosts use the storage partition assignments as the key to decide which hosts must execute the tasks, thus freeing up resources on the other hosts.

    • Shutdown of digest had an internal timeout of 10 seconds, which could lead to digest being dropped too soon while shutting down or restarting. This could result in ingest lag rising to over .0 seconds, where the expected lag is the time from when the shutdown is initiated until a few seconds after the new instance is started. There is a new config SHUTDOWN_ABORT_FLUSH_TIMEOUT_MILLIS, which defaults to 5 minutes (300,000 ms), to allow proper shutdown also on systems with many datasources or slow filesystems / disks.

    • Timechart in "steps mode" now draws the step to the right of the label instead of to the left, which matches the fact that the labels are start times.

    • NODE_ROLES is now applied in more background tasks.

Humio Server 1.4.4 Archive (2019-02-26)

Version: 1.4.4
Type: Archive
Release Date: 2019-02-26
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.2
Config. Changes: No

Available for download two days after release.

Bug Fix Release

Fixed in this release

  • Summary

    • Making a repository or view a favorite failed on recently created items.

    • Detect if a host in the cluster is being set to have the same vHost index as this host, and exit in this case.

    • On a cluster with many segments and a node with no segments, the cluster administration page could time out.

    • Repository statistics displayed on the frontpage were out of date on servers without any digest partitions. This also made the search page display the warning "You don't have any data yet, consider the following options..." while searching, until a result of the search was returned.

    • Having a timezone offset larger than the span in a timechart could result in errors.

Humio Server 1.4.3 Archive (2019-02-21)

Version: 1.4.3
Type: Archive
Release Date: 2019-02-21
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.2
Config. Changes: No

Available for download two days after release.

Improved restart of live queries.

Fixed in this release

  • Summary

    • Restart live queries if their query prefixes change when using Role based authentication and access control (RBAC).

    • Remove migration from internal data formats older than what v1.3.x writes. Do not start this version without having upgraded to 1.3.2 or 1.4.x first.

    • Improved restart performance to better support restarting (or upgrading) the servers in a large cluster with large amounts of data.

    • Humio's UI is programmed in Elm and we upgraded to use Elm 0.19.

  • Configuration

    • NODE_ROLES added, with current options "all" or "httponly". The latter allows the node to avoid spending CPU time on tasks that are irrelevant to a node that has never had any local segment files and will never be assigned any segments.

Humio Server 1.4.2 Archive (2019-02-19)

Version: 1.4.2
Type: Archive
Release Date: 2019-02-19
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.2
Config. Changes: No

Available for download two days after release.

Minor release. Improve restarting of queries and Ingest listener performance.

Fixed in this release

  • Summary

    • Bug fix in the LDAP authentication code.

Humio Server 1.4.1 Archive (2019-02-18)

Version: 1.4.1
Type: Archive
Release Date: 2019-02-18
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.2
Config. Changes: No

Available for download two days after release.

Minor release. Improve restarting of queries and Ingest listener performance.

Fixed in this release

  • Summary

    • Improved restarting of searches when a node goes away.

    • Improved Ingest listener performance. A single socket can now achieve more throughput than before.

Humio Server 1.4.0 Archive (2019-02-14)

Version: 1.4.0
Type: Archive
Release Date: 2019-02-14
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.2
Config. Changes: No

Available for download two days after release.

High availability for ingest and digest.

Fixed in this release

  • Summary

    • Emphasis is on efficiency during normal operation over being efficient in the failure cases: after a failure the cluster will need some time to recover, during which ingested events will get delayed. The cluster needs to have ample CPU to catch up after such a fail-over. There are both new and reinterpreted configuration options in the config environment for controlling how the segments get built for this.

    • Digest partitions can now be assigned to more than one host. Doing so enables the cluster to continue digesting incoming events if a single host is lost from the cluster.

    • Segments are flushed after 30 minutes. This makes S3 archiving likely to lag less than 40 minutes behind the incoming stream.

    • Clone existing dashboard when creating from the frontpage was broken.

    • If rolling back, make sure to roll back to version 1.3.2+

  • Functions

    • Limit match() / lookup() functions to a default maximum number of rows, or whatever MAX_STATE_LIMIT is set to.

Humio Server 1.3.2 Archive (2019-02-12)

Version: 1.3.2
Type: Archive
Release Date: 2019-02-12
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.0
Config. Changes: No

Available for download two days after release.

Allow an alert to have multiple notifiers, e.g. both Slack and PagerDuty

New features and improvements

  • Automation and Alerts

    • Allow an alert to have multiple notifiers, e.g. both Slack and PagerDuty.

Fixed in this release

  • Summary

    • Bar charts got incorrect height.

    • Fixed sandbox permissions for the owner of the sandbox.

    • Fixed HEC ingest of arrays of numbers.

Humio Server 1.3.1 Archive (2019-02-08)

Version: 1.3.1
Type: Archive
Release Date: 2019-02-08
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.3.0
Config. Changes: No

Available for download two days after release.

Bug Fixes in addition to a new permission model.

Fixed in this release

  • Summary

    • LDAP changes were rolled back to allow users to log in using just their username again.

Humio Server 1.3.0 Archive (2019-02-07)

Version: 1.3.0
Type: Archive
Release Date: 2019-02-07
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: Yes

Available for download two days after release.

New permission model

Fixed in this release

  • Summary

    • Metric of type=HISTOGRAM in the internal "humio-metrics" repo had all values a factor of 10^6 too low.

    • New permission model used for Role Based Access Control is now in use all the time. Default setup includes the roles member, admin, and eliminator as usual.

    • LDAP fix; may require users to log in with the full user@domain username, not just the username.

    • The config for RBAC has changed (config file has a new name, environment variable names have changed).

  • Functions

    • worldMap() function ignored the normalize option.

Humio Server 1.2.12 Archive (2019-02-05)

Version: 1.2.12
Type: Archive
Release Date: 2019-02-05
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Optimizations and Bug Fixes

Fixed in this release

  • Summary

    • The running-queries view now shows only the top queries, to avoid overloading the browser when many queries are running.

    • Added a timechart of bulk size to built-in dashboard "Humio stats".

    • Optimizing for many datasources in a repo by removing a bottleneck related to "tag grouping" auto-detection.

    • Improvements to Query Monitor

  • Functions

    • lowercase() function now preserves unmodified fields in the "include=both" case, and no longer modifies "@timezone".

Humio Server 1.2.11 Archive (2019-01-31)

Version: 1.2.11
Type: Archive
Release Date: 2019-01-31
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Support for non-loadbalanced queries, optimizations and Bug Fixes

Fixed in this release

  • Summary

    • When your query matches more events than can be shown at once, you can now scroll further back in time, "paging" through the older events. This works for any "non-aggregate query".

    • New "Zoom and pan" buttons to quickly change the search interval: Double the time-span or move the search interval 1/8th of the span to either side.

    • When your load-balancer does not act as "sticky" as described in Installing Using Containers, Humio now internally proxies search requests to the proper internal node.

    • Write Humio metrics into the new repo humio-metrics. Any user can query metrics but only for the repos they can search. Looking at metrics that are not repo-specific requires being a member of the humio-metrics repo.

    • Allow any user to query the humio-audit log, but only for the actions of the user. Looking at the actions of others requires being a member of the humio-audit repo.

    • Changes to desired digest partition to node assignments did not get reflected in other nodes until a restart of the other nodes.

Humio Server 1.2.10 Archive (2019-01-28)

Version: 1.2.10
Type: Archive
Release Date: 2019-01-28
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Load Balanced Queries, Optimizations and Bug Fixes

Fixed in this release

  • Summary

    • Added HTTP header to support loadbalancing queries. The header Humio-Query-Session is described in Installing Using Containers.

    • parseCsv did not handle broken input gracefully.

    • New built-in parser for the popular .NET Serilog logging library.

    • Improved HEC performance.

Humio Server 1.2.9 Archive (2019-01-18)

Version: 1.2.9
Type: Archive
Release Date: 2019-01-18
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Deleting queries on the HTTP endpoint now lets the query live for 5 seconds internally, to allow reuse of the same query if it is resubmitted.

    • RetentionJob would not delete remaining segments marked for deletion if one delete failed.

Humio Server 1.2.8 Archive (2019-01-17)

Version: 1.2.8
Type: Archive
Release Date: 2019-01-17
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Live queries in a cluster where not all servers had digest partitions could lead to events being stuck in the result when they should have been outside the query range at that point in time.

    • Better names for the metrics exposed on JMX. They are all in the com.humio.metrics package.

    • Cloning built-in parsers made them read-only which was not intentional.

    • Config KAFKA_DELETES_ALLOWED can be set to "true" to turn on deletes on the ingest queue even when KAFKA_MANAGED_BY_HUMIO=false.

    • Support for applying a custom parser to input events from any "beat" ingester by assigning the parser to the ingest token.

    • Handle HTTP 413 errors when uploading files that are too large on the files page.

  • Functions

    • New function, mostly for use in parsers scope: parseCsv() parses comma-separated fields into columns by name.
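The behavior described for parseCsv() can be approximated with a short Python sketch; the column names and input string below are illustrative examples of ours, not values from the release.

```python
import csv
import io

def parse_csv_fields(value, columns):
    """Rough sketch of a parseCsv()-style function: split one
    comma-separated value into named fields."""
    row = next(csv.reader(io.StringIO(value)))
    # Pair each parsed cell with its column name; extra cells are ignored.
    return dict(zip(columns, row))

fields = parse_csv_fields('2019-01-17,"login failed",alice', ["ts", "msg", "user"])
```

Note that a real CSV reader, like the one sketched here, handles quoting, which is why the quoted "login failed" cell is returned without its quotes.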

Humio Server 1.2.7 Archive (2019-01-15)

Version: 1.2.7
Type: Archive
Release Date: 2019-01-15
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • New function eventSize that provides an estimate of the number of bytes being used to represent the event, uncompressed.

    • Enable Escape to clear sticky events in all scenarios.

    • A race condition could lead to memory being leaked.

    • Humio metrics on the Prometheus endpoint now have help texts and use labels where appropriate.

    • Short time zone names such as "EST" did not work properly in functions that accept a time zone name.

    • S3 archiving has completed testing with customers and is no longer considered BETA; it is ready for production use.

    • Export as CSV allows selecting the fields in the download dialog when the query does not set the fields through table or select.

    • A new built-in parser for "syslog" in the format of both the old and new RFC, which uses a case statement to auto-detect the format.

Humio Server 1.2.6 Archive (2019-01-11)

Version: 1.2.6
Type: Archive
Release Date: 2019-01-11
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • S3 archiving: Include all tag keys in generated file names, also those listed in the configuration.

    • Allow GET/HEAD on the elastic _bulk emulation API without auth. Some clients poll that API before posting events.

    • Extracting a field from within a tag-field could make the query optimizer fail.

    • When using select() and not including @timestamp, that field got included in exported files anyway. Now it is only included when specified as a selected field.

    • Expose Humio metrics as JMX.

    • Allow both Basic-auth and OAuth on all ingest endpoints. We recommend putting tokens in the password field of the authentication.

    • Expose Humio metrics to Prometheus. The port needs to be configured using the configuration parameter PROMETHEUS_METRICS_PORT.

    • HEC endpoints now accept input from the Docker Splunk logging driver. You can thus get your Docker container logs into Humio using this logging driver. All you need to do is add --log-driver=splunk --log-opt splunk-token=$TOKEN --log-opt splunk-url=https://humioserver to your docker run command.

    • Calendar in query interval selector had time zone problems.

  • Automation and Alerts

    • Improved detection of alerts that are canceled to get them restarted.

  • Functions

    • stats() function (the [] operator for functions) did not pass on the data used to select default widget.

    • worldMap() function now accepts the precision parameter for the geohash function embedded inside worldMap().
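The Basic-auth recommendation above (putting the token in the password field) amounts to the following header construction. This is a sketch only: the token and URL are placeholders, and the request is built but never sent.

```python
import base64
import urllib.request

token = "INGEST-TOKEN"  # placeholder, not a real token
req = urllib.request.Request("https://humio.example.com/api/v1/ingest/hec")
# Basic auth encodes "username:password"; here the username is empty
# and the ingest token goes in the password position.
cred = base64.b64encode(f":{token}".encode("ascii")).decode("ascii")
req.add_header("Authorization", f"Basic {cred}")
```

Putting the token in the password field matters because many tools log or display usernames while treating passwords as secrets.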

Humio Server 1.2.5 Archive (2019-01-09)

Version: 1.2.5
Type: Archive
Release Date: 2019-01-09
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Timeouts on the http endpoint have been changed from 60s to infinite. This allows exporting from queries that hit very little data, e.g. a live query that receives one event every hour.

    • When running with PREFIX_AUTHORIZATION_ENABLED=true, Alerts and Shared dashboards now run as the user who saved them, restricted to those prefixes that the user has at the time the query starts.

    • Added new query functions lower and upper.

    • Query performance improved by fixing a bottleneck that was noticeable on CPUs with more than 16 cores.

    • HEC protocol now accepts data at the "/services/collector" url too. It also accepts authorization in the form of an "Authorization" header with any realm name, as long as the token is a valid Humio token. This allows using e.g. fluentd and other software to ship to Humio using HEC.

    • Segments with blocks where all timestamps are zero were reported as broken when trying to read them.

    • Allow * as the field for the lowercase function, to allow lowercasing all field names and values. The recommended use is in the ingest pipeline, as this is an expensive operation.

    • Basic auth (used mostly on ingest endpoints) now allows putting the token into the password field instead of the username field. Use of the password field is recommended as some software treats the password as secret and the username as public.

    • Audit logging did not happen for queries using the "/query" endpoint, i.e. using the export button in the UI.

    • If the parser ends up setting a timestamp before 1971, or does not set a timestamp, use now as the timestamp for the ingested event. The same applies for timestamps more than 10 seconds into the future.

  • Configuration

    • Humio will by default write threaddumps to the file humio-threaddumps.log every 10 seconds. This is configurable using the configuration parameter DUMP_THREADS_SECONDS. Previously this was disabled by default.
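The timestamp rule described in this release (fall back to now for timestamps before 1971 or more than 10 seconds in the future) can be sketched as follows; the function name and constants are ours for illustration, not Humio's internals.

```python
import time

EPOCH_1971 = 31536000      # seconds since epoch at 1971-01-01T00:00:00Z
MAX_FUTURE_SECONDS = 10    # tolerated clock skew into the future

def sanitize_timestamp(ts, now=None):
    """Use the parsed timestamp only if it is plausible; otherwise
    fall back to the current time, mirroring the rule above."""
    current = time.time() if now is None else now
    if ts is None or ts < EPOCH_1971 or ts > current + MAX_FUTURE_SECONDS:
        return current
    return ts
```

Clamping to "now" rather than rejecting the event keeps ingest lossless while preventing absurd timestamps from polluting retention and search intervals.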

Humio Server 1.2.4 Archive (2019-01-02)

Version: 1.2.4
Type: Archive
Release Date: 2019-01-02
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: Yes

Available for download two days after release.

Secondary storage of segment files

Fixed in this release

  • Summary

    • Performance fix: Running live queries for weeks with a small time span for the bucket size was expensive.

    • Extended the internal latency measurement to include the time spent in the custom parsers as well.

    • When a segment file was deleted while being scheduled in a query, the query would end up being "99%" done and never complete.

    • Secondary Storage of segment files. This allows using a "fast" disk primarily, and a "slow" one for older files.

    • Ingesting with HTTP Event Collector (HEC) is out of beta. The endpoint is located at /api/v1/ingest/hec.

    • Deleting an ingest listener did not stop the listener.

Humio Server 1.2.3 Archive (2018-12-18)

Version: 1.2.3
Type: Archive
Release Date: 2018-12-18
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Performance improvement: queries with NOT NotMatching were much slower than the plain filter NotMatching.

    • New ingest endpoints without the repo in the path; the ingest token handles authentication, repo, and parser selection.

    • Widget auto-selection improved.

    • Default to running queries on only vcores/2 threads.

    • Display of query speed in the clustered version was multiplied by (n+1)/n in an n-node cluster.

Humio Server 1.2.2 Archive (2018-12-14)

Version: 1.2.2
Type: Archive
Release Date: 2018-12-14
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: Yes

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Configuration change. ALLOW_UNLIMITED_STATE_SIZE has been replaced by MAX_STATE_LIMIT. MAX_STATE_LIMIT limits state size in Humio searches and now allows specifying a number. For example the number of groups in the groupBy() function is limited by MAX_STATE_LIMIT.

    • The sandbox did not work properly with PREFIX_AUTHORIZATION_ENABLED=true.
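The effect of the MAX_STATE_LIMIT configuration described above can be sketched in Python: an aggregation such as groupBy() refuses to grow its state past the limit. The limit value, function name, and error type here are illustrative, not Humio's actual implementation.

```python
def group_count(events, key, max_state_limit=3):
    """Count events per key, refusing to grow state past the limit,
    roughly as MAX_STATE_LIMIT caps groupBy() state in a search."""
    groups = {}
    for event in events:
        k = event[key]
        # A new group only counts against the limit when it is created.
        if k not in groups and len(groups) >= max_state_limit:
            raise RuntimeError("state size exceeds MAX_STATE_LIMIT")
        groups[k] = groups.get(k, 0) + 1
    return groups
```

Bounding state by group count (rather than a simple on/off flag, as the replaced ALLOW_UNLIMITED_STATE_SIZE was) lets operators tune memory use per cluster.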

Humio Server 1.2.1 Archive (2018-12-13)

Version: 1.2.1
Type: Archive
Release Date: 2018-12-13
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Improve maximum throughput of TCP-listener ingest, up to 4 times the previous level for a single socket, measured with small events on localhost. Use more sockets in parallel to achieve higher throughput.

    • Editing a parser with syntax errors did not work.

    • When pushing the query sub-system to the limit with many simultaneous long-running live queries for more than 10 seconds, a query could end up triggering a restart of itself.

Humio Server 1.2.0 Archive (2018-12-11)

Version: 1.2.0
Type: Archive
Release Date: 2018-12-11
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.2.0
Config. Changes: Yes

Available for download two days after release.

Create parsers using Humio's search language. Changes to "Backup" layout.

Fixed in this release

  • Summary

    • In a cluster where any node did not have any digest roles, queries could get polled much too frequently.

    • kvParse() function no longer overrides existing fields by default. To override existing fields based on input use: kvParse(override=true). See docs kvParse().

    • New parsers. It is now possible to create parsers using Humio's search syntax. Check out the Creating a Parser documentation. Existing parsers have not been migrated, and it is still possible to use the old parsers. We encourage using the new parsers and will automatically migrate old parsers in a future release.

    • Blacklist queries. In the administration section of Humio it is now possible to blacklist queries. This can also be done from the Query Monitor page, by clicking a query and then blocking it in the details section, or by using the Query Blacklist page directly.

    • The parser overview page now shows parser errors. This is a quick way to detect if parsers are working as expected.

    • The backup feature now stores the copies of the segment files in separate folders for each Humio node. This allows the Humio nodes to delete files that are no longer owned by that node also in the case where all Humio nodes share a shared network drive. This change has the effect that existing backups are no longer valid and cannot be read by this version. Delete any existing backups when upgrading, or reconfigure Humio to use a fresh location for the backups.

    • parseTimestamp() function has changed signature. The parameter nowIfNone has been removed and a new parameter addErrors introduced. This can break existing searches/alerts/dashboards (but the parameter has not been widely used). See docs parseTimestamp().

Humio Server 1.1.37 Archive (2018-12-03)

Version: 1.1.37
Type: Archive
Release Date: 2018-12-03
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Field-extracting using regex did not work in live queries in an implicit AND.

    • Fix bug in UI when uploading file.

    • Add debug logs for LDAP login.

Humio Server 1.1.36 Archive (2018-11-28)

Version: 1.1.36
Type: Archive
Release Date: 2018-11-28
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: Yes

Available for download two days after release.

Role-based auth support for SAML & LDAP

Fixed in this release

  • Summary

    • Config variable AUTO_CREATE_USER_ON_SUCCESSFULL_LOGIN renamed to (the correctly spelled) AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN.

    • GELF over HTTP support. Note that this format is a good fit for uncommon events, but due to lack of bulk support not efficient for streams with high amounts of traffic. Authentication is required using basic auth with an ingest token (or personal API token, but using that is not recommended).

    • Role-based access control is now supported for on-prem when using SAML or LDAP for authentication.

    • Set thread priorities on internal threads.

  • Functions

    • Extended session() function to accept an array of functions instead of only one.

Humio Server 1.1.35 Archive (2018-11-27)

Version: 1.1.35
Type: Archive
Release Date: 2018-11-27
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Graylog compatible ingest support.

Fixed in this release

  • Summary

    • Allow ingest in "GELF" v1.1 format. See GELF Payload Specification. Humio supports ingest using the UDP and chunked-UDP encodings, and both may optionally be compressed using ZLIB. (Gzip is not supported yet.) TCP is supported as zero-byte-delimited, uncompressed messages.

  • Automation and Alerts

    • Alerts did not properly encode all parts of the query in the URL that is sent in the notification.
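A minimal GELF v1.1 payload, following the GELF Payload Specification referenced above, looks like the sketch below. The host, message, and extra-field values are placeholders of ours.

```python
import json
import time

def gelf_message(host, short_message, **extra):
    """Build a minimal GELF v1.1 payload. Additional fields must be
    prefixed with '_' per the specification."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
    }
    msg.update({f"_{k}": v for k, v in extra.items()})
    return json.dumps(msg)

payload = gelf_message("web-01", "disk usage above threshold", service="nginx")
```

Because each GELF message is a standalone JSON document, there is no bulk framing, which is why the release notes for GELF over HTTP caution against using it for high-traffic streams.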

Humio Server 1.1.34 Archive (2018-11-22)

Version: 1.1.34
Type: Archive
Release Date: 2018-11-22
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: Yes

Available for download two days after release.

Improved LDAP support

Fixed in this release

  • Summary

    • By default, users must be added inside Humio before they can log in using external authentication methods like LDAP and SAML. This can be controlled using the configuration flag AUTO_CREATE_USER_ON_SUCCESSFUL_LOGIN=false. If users are auto-created in Humio when they successfully log in for the first time, the user will not have access to any repositories unless explicitly granted. A new user will only be able to access their personal sandbox.

  • Functions

    • Bug Fix for match() function. In some cases it did not match quoted strings.

Humio Server 1.1.33 Archive (2018-11-15)

Version: 1.1.33
Type: Archive
Release Date: 2018-11-15
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Bug Fix and digest performance

Fixed in this release

  • Summary

    • Digest throughput improvements

    • Fixed: the parsers page did not show the built-in parsers.

Humio Server 1.1.32 Archive (2018-11-15)

Version: 1.1.32
Type: Archive
Release Date: 2018-11-15
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: Yes

Available for download two days after release.

stripAnsiCodes() function, top on multiple fields and Bug Fixes, default repository query

Fixed in this release

  • Summary

    • Repositories' default search interval has been replaced with the possibility to choose a default repository query. All default search intervals will be migrated to default queries. A default query can be set by saving a query and checking the "Use as default" checkbox.

    • Added support for Java 11. Humio can now be run with Java 9 or Java 11. Humio's Docker images are updated to use Java 11 and we encourage people to update to Java 11 and use Azul's OpenJDK Zulu builds.

  • Functions

    • New range() function: finds numeric range between the smallest and largest numbers for the specified field over a set of events.

    • New stripAnsiCodes() function: strips ANSI color codes from a field.

    • top() function now supports more than one field, grouping on the combination of fields.

Humio Server 1.1.31 Archive (2018-11-09)

Version: 1.1.31
Type: Archive
Release Date: 2018-11-09
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Reduce latency of incoming events.

Fixed in this release

  • Summary

    • Improved built-in dashboards, allowing them to be shared using share links like any other dashboard.

    • The latency measured from when an event arrives at Humio until live queries have been updated with that event has been reduced by approximately 1 second, and is now measured in milliseconds.

    • It is now possible to block ingestion in a repository. It can be done from the repository's settings page.

Humio Server 1.1.30 Archive (2018-11-04)

Version: 1.1.30
Type: Archive
Release Date: 2018-11-04
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

'Create Parser' button opened a beta page for creating parsers

Fixed in this release

  • Summary

    • 'Create Parser' button opened a beta page for creating parsers.

    • Handle clients posting empty bulks of events.

Humio Server 1.1.29 Archive (2018-11-02)

Version: 1.1.29
Type: Archive
Release Date: 2018-11-02
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Bug Fixes

Fixed in this release

  • Summary

    • Allow '.' in S3 paths.

    • Live queries could get false sharing of eval() results.

Humio Server 1.1.28 Archive (2018-10-31)

Version: 1.1.28
Type: Archive
Release Date: 2018-10-31
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: Yes

Available for download two days after release.

Improved SAML authentication and digest performance

Fixed in this release

  • Summary

    • When zooming to a wider time range on a timechart with a fixed "span" parameter, widen the span and add a warning to allow the chart to work instead of failing with "too many buckets".

    • Back-pressure on ingest should not be applied to internal log lines, such as the internal debug and audit log entries.

    • The first search to hit a repository in a cluster with millions of segments would fail while listing those files.

    • Dashboard searches are kept running for 3 days, when they are not polled. After that they are not kept alive on the server. This is configurable using the config IDLE_POLL_TIME_BEFORE_DASHBOARD_QUERY_IS_CANCELLED_MINUTES. This replaces IDLE_POLL_TIME_BEFORE_LIVE_QUERY_IS_CANCELLED_MINUTES.

    • Performance improvements for digest on systems with many sparse datasources.

Humio Server 1.1.27 Archive (2018-10-24)

Version: 1.1.27
Type: Archive
Release Date: 2018-10-24
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Back-pressure on ingest overload and Bug Fixes

Fixed in this release

  • Summary

    • When too much data flows into Humio for it to keep up, apply back-pressure by responding with status code 503 and a Retry-After header.

    • The event list on the search page now correctly resets the widget when a new search is started

    • The max value of the y-axis of timecharts is now correctly updated on new results

    • Many changes internally to prepare for having more than one node in the "Digest rules" for fail-over handling of ingest traffic.

    • Pagination now works for tables on dashboards
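A client honoring the 503 back-pressure described above might decide its retry behavior like this sketch. The retry policy (default delay, treating anything other than 503 as non-retryable) is ours for illustration, not prescribed by Humio.

```python
def retry_delay(status_code, headers, default=1.0):
    """Return how long to wait (seconds) before retrying an ingest
    request, or None if the request should not be retried.
    Honors the Retry-After header sent with 503 back-pressure."""
    if status_code != 503:
        return None
    value = headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        # Header missing or not a number of seconds: use a default.
        return default
```

In a real shipper this delay would feed a sleep-and-retry loop, ideally with jitter so that many clients do not retry in lockstep after a back-pressure episode.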

Humio Server 1.1.26 Archive (2018-10-15)

Version: 1.1.26
Type: Archive
Release Date: 2018-10-15
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Performance improvement in table, sort and tail especially when using a large limit.

    • Using the field message instead of log (as described in v1.1.25) did not work properly.

  • Functions

    • select() function did not render results if they did not have @timestamp and @id. (select() is like table() but unsorted, and allows exporting an unbounded set of events.)

    • session() function could miss events in some situations.

Humio Server 1.1.25 Archive (2018-10-12)

Version: 1.1.25
Type: Archive
Release Date: 2018-10-12
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.1.0
Config. Changes: No

Available for download two days after release.

Adds World Map and Sankey visualizations and SAML authentication support.

Fixed in this release

  • Summary

    • S3 archiving now handles grouped tags properly, generating one file for each tag combination also for grouped tags.

    • The new visualizations require a change to the CSP. If you have your own CSP, you need to add 'unsafe-eval' to the script-src key.

    • Importing repositories from another Humio instance had the repository ID where the repository name was required.

    • New visualization helper functions geohash(), worldMap(), sankey().

    • The update services widget that "phones home" to update.humio.com can now only be disabled if you have a license installed.

    • New visualizations: World Map and Sankey .

    • Support using Filebeat to ship logs from the Helm chart for ingesting logs from a Kubernetes cluster. The message can be in the field log or message.

    • Searching using ... | *foo* | ... is identical to ... | foo | ..., since plain text searches are always substring matches. But the former was turned into a full-string regex match for ^.*foo.*$, which is 10-30 times slower than the fast substring search in Humio.

    • New query syntax: match on a field, which eases matching several cases on a single field.

  • Functions

    • split() function could apply the timestamp of one event to more of the subsequent events than those that originated from that event.

    • Performance regression in functions table(), sort(), tail() and head() that slowed them down a lot when the limit was larger than .0.
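The equivalence of `*foo*` and `foo` noted above, and why the old behavior was slow, can be illustrated in Python (illustrative only, not Humio's engine):

```python
import re

log_line = "2018-10-12 ERROR foobar failed"

# Plain substring search -- what a bare filter word does.
assert "foo" in log_line

# Wrapping the word in wildcards (*foo*) used to compile to an
# anchored full-string regex, which matches the same lines...
pattern = re.compile(r"^.*foo.*$")
assert pattern.search(log_line) is not None

# ...but scanning with such a regex is much slower than a plain
# substring scan, which is why *foo* is now treated the same as foo.
```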

Humio Server 1.1.24 Archive (2018-10-05)

Version: 1.1.24 | Type: Archive | Release Date: 2018-10-05 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Maintenance Build

Fixed in this release

  • Summary

    • Segments deleted by retention-by-size would sometimes get left behind in global, adding warnings to users searching at intervals including deleted segments.

    • Reorder query prefixes to execute queries more efficiently. Moves tags to the front of query string to allow better start-of-query datasource filtering.

Humio Server 1.1.23 Archive (2018-10-01)

Version: 1.1.23 | Type: Archive | Release Date: 2018-10-01 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Update Kafka server version to 1.100.0

Fixed in this release

  • Summary

    • Do not reassign partitions in Kafka when there are already sufficient replicas (only applied when KAFKA_MANAGED_BY_HUMIO=true, the default).

    • Handle empty uploaded files.

    • Humio's Kafka and ZooKeeper Docker images have been upgraded to use Kafka 1.100.0. We recommend keeping the update procedure simple and not doing a rolling upgrade: instead, shut down Humio, Kafka and ZooKeeper, then fetch the new images and start ZooKeeper, Kafka and Humio. For details see Kafka's documentation for upgrading. (Note: this change was listed in the release notes for v1.1.20 even though it was applied only to the Kafka client there, and not to the server).

    • Improved performance of parsers that have (?<@timestamp>\S+) as their timestamp extractor regex.

    • The query planner has been improved, so it can more precisely limit which data to search based on tags.

  • Configuration

    • Do not remove other topic configs in Kafka when setting those needed by Humio (only applied when KAFKA_MANAGED_BY_HUMIO=true, the default).

Humio Server 1.1.22 Archive (2018-09-27)

Version: 1.1.22 | Type: Archive | Release Date: 2018-09-27 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Fix Kafka prefix configuration problem. Faster percentiles. Allow globbing when using match().

Fixed in this release

  • Summary

    • The key/value parser now (also) considers characters below 0x20 to be separators. Good for e.g. FIX-format messages.

    • The UI for setting node ID on an ingest listener did not work.

    • Add flag match(..., glob=true|false) allowing the key column of a CSV file to include globbing with *.

    • If using Kafka prefix configuration, the server would always assume Kafka had been reset. Release 1.1.20 introduced this problem.

  • Functions

    • percentile() function changed to use 32-bit precision floating point, making it ~3x faster.
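The glob=true flag on match() described above lets the key column of a CSV file contain wildcard patterns. A sketch of that lookup behavior using Python's standard glob matcher (the rows and helper name are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical lookup rows: the key column may contain globs with *.
rows = [
    ("api-*", "backend"),
    ("web-01", "frontend"),
]

def glob_lookup(key, rows):
    """Return the value of the first row whose (possibly globbed) key matches."""
    for pattern, value in rows:
        if fnmatch(key, pattern):
            return value
    return None

print(glob_lookup("api-7", rows))   # -> backend
```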

Humio Server 1.1.21 Archive (2018-09-25)

Version: 1.1.21 | Type: Archive | Release Date: 2018-09-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Bug fix release.

Fixed in this release

  • Summary

    • The implicit tail(.0) that is applied when no aggregate function is in the query input did not sort properly in certain cases.

    • Query Monitor now also shows CPU time spent in the last 5 seconds.

    • Timechart on views broke in the previous version, 1.1.20.

Humio Server 1.1.20 Archive (2018-09-24)

Version: 1.1.20 | Type: Archive | Release Date: 2018-09-24 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Use Kafka version 1.100.0

Fixed in this release

  • Summary

    • Humio's Kafka and ZooKeeper Docker images have been upgraded to use Kafka 1.100.0. (Update: see 1.1.23.)

    • Added the possibility to add extra Kafka configuration properties to Kafka consumers and producers by pointing to a properties file using EXTRA_KAFKA_CONFIGS_FILE. This makes it possible to connect to a Kafka cluster using SSL and SASL.

    • Humio is upgraded to use the Kafka 2.0 client. It is still possible to connect to a Kafka running version 1.X

Humio Server 1.1.19 Archive (2018-09-21)

Version: 1.1.19 | Type: Archive | Release Date: 2018-09-21 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Cluster administration, dashboard clone/export/import and faster startup on large datasets.

Fixed in this release

  • Summary

    • Auto-nice long-running queries, making them take lower priority compared to younger queries, measured by CPU time spent.

    • Allow negative index in splitString, which then selects from the end instead of from the start. -1 is the last element.

    • Fix #2263, support for !match().

    • Generate pretty @display value in split() function.

    • HUMIO_KAFKA_TOPIC_PREFIX was not applied to all topics used by Humio, only where the name matched global-*.

    • Startup of the server is now much faster on large datasets.

    • Setting INGEST_QUEUE_INITIAL_PARTITIONS in config decides the initial number of partitions in the ingest queue. This only has effect when starting a fresh Humio cluster with no existing data.

    • Dashboards can now be copied to other repos and exported and imported as templates.

    • Faster response on cluster management and entering the search page.

    • Upgrading to this version requires running at least v1.1.0. If you run an older version, upgrade to v1.1.18, then v1.1.19.

    • New cluster management actions for reassigning partitions to hosts and moving existing data to other hosts.
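The negative splitString index mentioned above behaves like Python's negative indexing, counting from the end. A small sketch of the semantics (the helper name is hypothetical):

```python
def split_string_index(value, sep, index):
    """Split value on sep and select one element, allowing negative
    indexes that count from the end (-1 is the last element)."""
    parts = value.split(sep)
    if -len(parts) <= index < len(parts):
        return parts[index]
    return None  # out of range

print(split_string_index("a,b,c", ",", -1))  # -> c
```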

Humio Server 1.1.18 Archive (2018-09-13)

Version: 1.1.18 | Type: Archive | Release Date: 2018-09-13 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Ease using your own Kafka including older versions of Kafka.

Fixed in this release

  • Summary

    • Ease using your own Kafka including older versions of Kafka

    • Added MAX_HOURS_SEGMENT_OPEN to set the number of hours after which a segment is closed and a new one started, even if it has not filled up. Note that you may want to disable segment merging in this case to preserve these smaller segment files, by also setting ENABLE_SEGMENT_MERGING=false

    • Set KAFKA_MANAGED_BY_HUMIO=false to stop Humio from increasing the replication of the topics in Kafka.

Humio Server 1.1.17 Archive (2018-09-10)

Version: 1.1.17 | Type: Archive | Release Date: 2018-09-10 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Show Events Per Second (EPS) when searching.

Fixed in this release

  • Summary

    • Improve search performance when adding fields to events.

    • Dashboards can now be copied to other views.

    • Show Events Per Second (EPS) when searching.

Humio Server 1.1.16 Archive (2018-09-06)

Version: 1.1.16 | Type: Archive | Release Date: 2018-09-06 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Add CRC to data files. Migrates data to support upcoming features and to later serve as a potential rollback point.

Fixed in this release

  • Summary

    • New 'cluster overview' tab in admin page (this is work in progress, feedback appreciated).

    • Bug Fix. Regular expressions using /.../ syntax sometimes matched incorrectly.

    • Scheduling of queries now takes CPU time spent in each into account, allowing new queries to get more execution time than long-running queries.

    • Adds CRC32c to the segment file contents.

    • Support CSV downloads. End the query with | table([...]) or | select([...]) to choose columns.

    • Note! v1.1.15 is able to read the files generated by v1.1.16. Rolling back to version 1.1.14 or earlier is not possible, as those versions cannot read the files that have CRC.

    • Regular expression matching with a 'plain' prefix is now faster.

Humio Server 1.1.15 Archive (2018-09-03)

Version: 1.1.15 | Type: Archive | Release Date: 2018-09-03 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Support for setting parsers in our Kubernetes integration and a new parseHexString() function.

Fixed in this release

  • Summary

    • Updated instructions for configuring the PagerDuty notifier.

    • Bug Fix. Tables now sort globally instead of per page.

    • Our Helm chart for ingesting logs from a Kubernetes cluster now supports setting a parser using the pod label humio-parser.

      For more information, see Use Case: Migrating from Helm Chart to Operator.

Humio Server 1.1.14 Archive (2018-08-21)

Version: 1.1.14 | Type: Archive | Release Date: 2018-08-21 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Starring alerts, and a new @displaystring for formatting log strings.

Fixed in this release

  • Summary

    • Bug Fix. Slack notifier had the message twice in the request.

    • Improve Netflow parser to handle packets coming out of order.

    • Introduced @displaystring.

  • Automation and Alerts

    • Starring alerts. Get your favorite alerts to the top of the list.

Humio Server 1.1.13 Archive (2018-08-16)

Version: 1.1.13 | Type: Archive | Release Date: 2018-08-16 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Improve LDAP support.

Fixed in this release

  • Summary

    • Enable logging in with LDAP without providing domain name. Domain name can be set as a config using LDAP_DOMAIN_NAME. See Authenticating with LDAP.

    • Enforce an upper bound of the number of fields allowed for one event. The limit is .0. If an event has too many fields, .0 are included.

Humio Server 1.1.12 Archive (2018-08-15)

Version: 1.1.12 | Type: Archive | Release Date: 2018-08-15 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

SMTP support for sending emails, new Files UI and support for self-signed certs for ldap.

Fixed in this release

  • Summary

    • New Files UI. Possible to manage files for use in the lookup() function.

    • LDAP users must now be added with their domain. For example add (instead of just user). Existing users are migrated by the system, so no actions are required.

    • Segment file replication did not (re-)fetch a segment file if the file was missing on disk while the "global" state claimed it was present.

    • Eliminate the backticks syntax from eval(); the same effect can be obtained with transpose().

    • Add operators >, <, >=, <=, and % to eval expressions.

    • SMTP support for Email Configuration

    • LDAPS can use a self-signed certificate through config.

Humio Server 1.1.11 Archive (2018-08-03)

Version: 1.1.11 | Type: Archive | Release Date: 2018-08-03 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • "Export to file" did not work in the UI.

    • Performance improvement in the internal logging to the Humio dataspace.

    • Eliminated a race condition in the ingest pipeline that could drop data in overload conditions.

Humio Server 1.1.10 Archive (2018-08-02)

Version: 1.1.10 | Type: Archive | Release Date: 2018-08-02 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Now using Google's RE2/j as the default, not the JDK's. Can be configured using USE_JAVA_REGEX.

    • Autosharding now happens after tag grouping. Improves performance in cases where some grouped datasources are slow and others very fast.

Humio Server 1.1.9 Archive (2018-07-30)

Version: 1.1.9 | Type: Archive | Release Date: 2018-07-30 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Bug Fix. When encountering a broken segment file, let the server start and ignore the broken file.

Humio Server 1.1.8 Archive (2018-07-26)

Version: 1.1.8 | Type: Archive | Release Date: 2018-07-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Bug Fix. Autosharded tags should not get tag-grouped.

    • Improve handling of color codes in Humio's built-in key-value parser.

Humio Server 1.1.7 Archive (2018-07-05)

Version: 1.1.7 | Type: Archive | Release Date: 2018-07-05 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Bug Fix. Remove race condition that could create duplicate events on restart.

    • Update embedded GeoLite2 database to 20180703 version.

    • Verify Java version requirement on startup.

    • Datasource autosharding is now able to reduce the number of shards.

  • Functions

    • Bug Fix. Fix split() to allow splitting JSON arrays of simple values.

    • New function match() is like lookup() but better suited for filtering.

    • Bug Fix. sort() with multiple fields was not stable for missing values.

Humio Server 1.1.6 Archive (2018-07-04)

Version: 1.1.6 | Type: Archive | Release Date: 2018-07-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Bug Fix. Log rotation of Humio's own log files: files were not deleted, but now they are.

    • Improved datasource autosharding to be less eager.

    • Bug Fix. Viewing details of a logline while doing a live query did not pause the stream. This resulted in the details view being closed when the logline went out of scope.

    • Restructured documentation.

Humio Server 1.1.5 Archive (2018-06-28)

Version: 1.1.5 | Type: Archive | Release Date: 2018-06-28 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Remove supervisors from the Docker image humio/humio-core

Humio Server 1.1.4 Archive (2018-06-28)

Version: 1.1.4 | Type: Archive | Release Date: 2018-06-28 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Bug Fix. Repo admins that are allowed to delete data can now delete datasources.

Humio Server 1.1.3 Archive (2018-06-27)

Version: 1.1.3 | Type: Archive | Release Date: 2018-06-27 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Fix memory problem when streaming events using the query endpoint.

    • Rename a widget on dashboard directly from the dashboard itself.

    • Supporting links in tables.

Humio Server 1.1.2 Archive (2018-06-25)

Version: 1.1.2 | Type: Archive | Release Date: 2018-06-25 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Fix clock on dashboards page.

    • Fix creating the sandbox dataspace in the signup flow.

  • Automation and Alerts

    • Allow fields in alert webhooks.

Humio Server 1.1.1 Archive (2018-06-21)

Version: 1.1.1 | Type: Archive | Release Date: 2018-06-21 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Fix fullscreen mode for read-only dashboards.

Humio Server 1.1.0 Archive (2018-06-21)

Version: 1.1.0 | Type: Archive | Release Date: 2018-06-21 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor Release

Fixed in this release

  • Summary

    • Added documentation for Kafka Connect Log Format.

    • It is not possible to roll back to previous versions after upgrading. Back up global data by copying the file /data/humio-data/global-data-snapshot.json before upgrading. Then it will be possible to roll back (with the possibility of losing new datasources, users, dashboards, etc. that were created while running this version).

    • Moved some of the edit options from the dashboard list to the dashboard itself.

    • Amazon AMI available in the Amazon marketplace.

    • Improved Fluentbit integration to better support ingesting logs from Kubernetes.

    • Dataspaces have been split into views and repositories. This allows searching across multiple repositories and adds support for fine-grained access permissions. Read the introduction in this blog post and check out the Repositories & Views documentation.

Humio Server 1.0.69 Archive (2018-06-12)

Version: 1.0.69 | Type: Archive | Release Date: 2018-06-12 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Hotfix Release

Fixed in this release

  • Summary

    • Canceling a query in 1.0.68 would consume resources, blocking worker threads for a long time. Please upgrade.

Humio Server 1.0.68 Archive (2018-06-11)

Version: 1.0.68 | Type: Archive | Release Date: 2018-06-11 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: Yes

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Changes to Humio's logging. Humio now logs to 2 files /data/logs/humio-debug.log and /data/logs/humio_std_out.log. Std out has become less noisy and is mostly error logging. This is only relevant for on-prem installations.

    • Ingest queue replication factor in Kafka is now by default set to 2 (was 1). If it is currently set to 1 Humio will increase it to 2. The configuration parameter INGEST_QUEUE_REPLICATION_FACTOR can be used to control the replication factor.

    • Deeplinking did not work in combination with having to log in.

Humio Server 1.0.67 Archive (2018-06-01)

Version: 1.0.67 | Type: Archive | Release Date: 2018-06-01 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Support for GDPR: Hardened Audit Logging.

    • Improved search performance when reading data from spinning disk

Humio Server 1.0.66 Archive (2018-05-23)

Version: 1.0.66 | Type: Archive | Release Date: 2018-05-23 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Added "prune replicas" method on on-premises HTTP API to remove extra copies when reducing replica count in cluster.

    • Increased default thread pool sizes a bit, but still only 1/4 of what they were before 1.0.65.

Humio Server 1.0.65 Archive (2018-05-22)

Version: 1.0.65 | Type: Archive | Release Date: 2018-05-22 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Search performance improvement: Reduce GC-pressure from reading files.

    • Reduced default thread pool sizes.

    • Importing a dataspace from another Humio instance did not handle multi-node clusters properly.

Humio Server 1.0.64 Archive (2018-05-15)

Version: 1.0.64 | Type: Archive | Release Date: 2018-05-15 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Search scheduling is more fair in cases with multiple heavy searches.

Humio Server 1.0.63 Archive (2018-05-15)

Version: 1.0.63 | Type: Archive | Release Date: 2018-05-15 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Read segment files using read instead of mmap.

  • Automation and Alerts

    • Bug Fix. Alerts could end up not being run after restarting a query.

Humio Server 1.0.62 Archive (2018-05-09)

Version: 1.0.62 | Type: Archive | Release Date: 2018-05-09 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Increase timeout for http query requests.

Humio Server 1.0.61 Archive (2018-05-08)

Version: 1.0.61 | Type: Archive | Release Date: 2018-05-08 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Worked on http request handling - do not starve requests under load.

    • Improved "connect points" option in timecharts.

Humio Server 1.0.60 Archive (2018-05-04)

Version: 1.0.60 | Type: Archive | Release Date: 2018-05-04 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Timeout idle http connections after 60 seconds.

    • Removed logging of verbose data structure when querying.

    • Increase maximum allowed http connections to 2.00.

    • Fix dashboard links on frontpage.

    • Removed error logging when tokens have expired.

    • Possible to expose an Elastic-compatible endpoint on port 9200, which is the Elastic default. Use the configuration parameter ELASTIC_PORT.

Humio Server 1.0.59 Archive (2018-04-26)

Version: 1.0.59 | Type: Archive | Release Date: 2018-04-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: Yes

Available for download two days after release.

Regular update release.

Requires data migration and configuration changes — Auth0 changes.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The configuration options AUTH0_API_CLIENT_ID and AUTH0_API_CLIENT_SECRET have been deprecated in favor of AUTH0_CLIENT_ID and AUTH0_CLIENT_SECRET respectively - the old names will continue to work as aliases.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Summary

    • If you are using Auth0 in your on-prem installation of Humio, you must update your Auth0 Application configuration and re-configure Humio (or start using your OAuth identity provider directly). We at Humio will be happy to help. The configuration changes below are only relevant if Auth0 is used for authentication:

Fixed in this release

  • Summary

    • New convenience syntax for passing the as parameter using assignment syntax. minx := min(x) is equivalent to min(x, as=minx). This can be used at top-level | between bars |, or within [ array blocks ].

    • The parser handles left and right double quotes, which can easily occur if you edit your queries in a word processor, e.g., Protocol := "UDP - 17"

    • The Auth0 configuration properties AUTH0_WEB_CLIENT_ID and AUTH0_WEB_CLIENT_SECRET have been removed. You can safely delete the associated Auth0 Application, as Humio only requires one Auth0 Application in the future.

    • New syntax for computing multiple aggregates for example, to compute both min and max ... | [min(foo), max(foo)] | .... This syntax is shorthand for the stats() function.

    • Existing users on cloud.humio.com will need to re-authenticate the application 'humio' to use their account information.

    • Users that are authenticated through Auth0 will need to configure the PUBLIC_URL option, and you must add

      $PUBLIC_URL/auth/auth0

      to the list of callback URLs in your Auth0 Application.

    • New convenience syntax for passing the field= parameter to a function using curly assignment syntax. ip_addr =~ cidr("127.0.0.1/24") is equivalent to cidr("127.0.0.1/24", field=ip_addr). This can also be used for regex, e.g., name =~ regex("foo.*").

    • The configuration option AUTH0_WEB_CLIENT_ID_BASE64ENC has been removed.

    • Humio Auth0 no longer requires the grant read:users, you can safely disable that on your Auth0 Application - or just leave it.

    • New naming convention for function names is camelCase() which is now reflected in documentation and examples. Functions are matched case-insensitively, so the change is backwards compatible.

    • Humio now supports authenticating with Google, GitHub and Atlassian/Bitbucket directly (see Authenticating with OAuth Protocol), without the need to go through Auth0. This is part of our GDPR efforts for our customers on cloud.humio.com, so as to avoid more third parties being involved with your data than necessary.

    • Renamed the alt keyword to case. alt will still work for a few releases but is now deprecated.

    • Depending on how you set up your Auth0 application, you may need to update your Auth0 Application Type to "Regular Web Application" in your Auth0 account; more details can be found in our Authenticating with OAuth Protocol documentation.

    • The head() function allows you to do deduplication by using groupBy([ field1, field2, ... ], function=head(1))
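The cidr() filtering shown above can be mimicked with Python's standard library; a minimal sketch (the helper name is illustrative, not Humio's implementation):

```python
import ipaddress

def in_cidr(ip, cidr):
    """Field-based CIDR filtering, similar in spirit to
    ip_addr =~ cidr("127.0.0.1/24")."""
    # strict=False lets a host address like 127.0.0.1/24 denote its network.
    net = ipaddress.ip_network(cidr, strict=False)
    return ipaddress.ip_address(ip) in net

print(in_cidr("127.0.0.42", "127.0.0.1/24"))  # -> True
```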

Humio Server 1.0.58 Archive (2018-04-19)

Version: 1.0.58 | Type: Archive | Release Date: 2018-04-19 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Improved versioning. The version now starts with an actual version number. This version matches the version in Docker Hub.

    • Documentation has moved into its own project online at https://docs.humio.com.

    • JSON parsers can be configured to parse nested JSON. That means it will look at all strings inside the JSON and check if they are actually JSON.

    • Small improvements to Grafana plugin.

    • New on-boarding flow supporting downloading and running Humio.

    • Humio is available as a downloadable Docker image. It can be used in trial mode for a month. After that a license is required.
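The nested-JSON parsing described above (looking at all strings inside the JSON and checking whether they are themselves JSON) can be sketched like this; the helper below is illustrative, not Humio's parser:

```python
import json

def expand_nested_json(value):
    """Recursively parse any string values that are themselves JSON objects or arrays."""
    if isinstance(value, dict):
        return {k: expand_nested_json(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_nested_json(v) for v in value]
    if isinstance(value, str):
        try:
            inner = json.loads(value)
        except ValueError:
            return value  # not JSON -- keep the original string
        if isinstance(inner, (dict, list)):
            return expand_nested_json(inner)
    return value

event = {"msg": '{"user": "alice", "ok": true}'}
print(expand_nested_json(event))  # -> {'msg': {'user': 'alice', 'ok': True}}
```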

Humio Server 1.0.57 Archive (2018-04-16)

Version: 1.0.57 | Type: Archive | Release Date: 2018-04-16 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Cloud-only release.

Fixed in this release

  • Summary

    • Added an update service widget to the menu bar that will announce new updates and give access to release notes directly in Humio. The service contacts a remote service: update.humio.com. If you do not want to allow this communication you can disable it from the Root Administration interface.

    • Updated Humio and Kafka Docker images to use Java 9.

    • New Query coordinator for handling distributed queries. This should improve the error messages on communication problems.

Humio Server 1.0.56 Archive (2018-03-26)

Version: 1.0.56 | Type: Archive | Release Date: 2018-03-26 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Bug Fix Release.

Fixed in this release

  • Summary

    • Race condition in segment merging code. Could lead to loss of data when changing the size of segment files. The problem was introduced in the previous release as part of the out-of-order processing fix.

    • Auto suggestions selection using mouse.

Humio Server 1.0.55 Archive (2018-03-22)

Version: 1.0.55 | Type: Archive | Release Date: 2018-03-22 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • JSON is not pretty printed when showing the details for an event in the message tab.

    • Improved Grafana integration.

    • Added a JSON tab when showing event details. The tab pretty prints the event and is only visible for JSON data.

    • When system got overloaded - events could get lost if processed out of order in a datasource.

    • Improved ingest performance by tuning LZ4 compression.

Humio Server 1.0.54 Archive (2018-03-15)

Version: 1.0.54 | Type: Archive | Release Date: 2018-03-15 | Availability: Cloud | End of Support: 2020-11-30 | Security Updates: No | Upgrades From: 1.1.0 | Config. Changes: No

Available for download two days after release.

Regular update release.

Data migrations are required, but compatible both ways: users on a dataspace can now have multiple roles.

Fixed in this release

  • Summary

    • Audit Logging BETA feature. There is now a humio-audit dataspace with audit log of user actions on Humio.

    • "Export to file" failed on Sandbox dataspaces.

    • In uncommon cases when ingesting a large bulk of events that were not compressible at all, the non-compression could fail.

    • License keys in UI now ignore whitespace for ease of inserting keys with line breaks.

Humio Server 1.0.53 Archive (2018-03-13)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.53Archive2018-03-13

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • In some scenarios the browser's back button had to be clicked twice or more to go back.

    • Enter did not start a search after navigating using the browser's back button.

    • Introduced License Installation. Humio requires a license to run. It can run in trial mode with all features enabled for a month.

Humio Server 1.0.52 Archive (2018-03-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.52Archive2018-03-06

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Cloud-only release.

Fixed in this release

  • Summary

    • Make /regex/ work with AND and OR combinators.

    • Disconnect points on timecharts if there are empty buckets between them.

    • Labeling dashboards. Put labels on dashboards to organize them.

    • Gzipping of HTTP responses could hit an infinite loop, burning CPU until the process was restarted.

    • Starring dashboards. They will go to the top of the dashboard list and there is a section with starred dashboards on the frontpage.

Humio Server 1.0.51 Archive (2018-02-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.51Archive2018-02-23

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor Update Release

Fixed in this release

  • Summary

    • Fix bug: Retention was not deleting anything.

Humio Server 1.0.50 Archive (2018-02-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.50Archive2018-02-22

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor Update Release

Fixed in this release

  • Summary

    • Clustered on-premises installs could stall in the copying of completed segment files inside the cluster.

    • Fix issue with : occurring in certain query expressions, introduced with the new := syntax. A query such as foo:bar | ... using an unquoted string would fail to parse.

    • Allow | before and after query.

    • Allow saving dashboards with queries that do not parse. Allows editing dashboards where another widget is failing.

Humio Server 1.0.49 Archive (2018-02-21)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.49Archive2018-02-21

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

New features and improvements

  • Summary

    • Show Widget Queries on Dashboards. You can toggle displaying the queries that drive the widgets by clicking the "Code" button on dashboards. This makes it easier to write filters because you can peek at what fields are being used in your widgets.

    • Dashboard Filters. Dashboard Filters allow you to filter the data set across all widgets in a dashboard. This effectively means that you can use dashboards for drill-down and reuse dashboards with several configurations. Currently filters support writing filter expressions that are applied as prefixes to all your widgets' queries. We plan to extend this to support more complex parameterized set-ups soon - but for now, prefixing is a powerful tool that is adequate for most scenarios. Filters can be named and saved so you can quickly jump from e.g. Production Data to Data from your Staging Environment. You can also mark a filter as "Default". This means that the filter will automatically be applied when opening a dashboard.

    • Better URL handling in dashboards. The URL of a dashboard now includes more information about the current state or the UI. This means you can copy the URL and share it with others to link directly to what you are looking at. This includes dashboard time, active dashboard filter, and fullscreen parameters. This will make it easy to have wall monitors show the same dashboard but with different filters applied, and allow you to send links when you have changed the dashboard search interval.

Fixed in this release

  • Summary

    • Improvements to the query optimizer. Data source selection (choosing which data files to scan from disk) can now deal with more complex tag expressions. For instance, queries involving OR, such as #tag1=foo OR #tag2=bar, are now processed more efficiently. The query analyzer is also able to identify #tag=value elements everywhere in the query, not only at the beginning of the query.

    • Improvement: Better handling of reconnecting dashboards when updating a Humio instance.

    • Configure when Humio stops updating live queries (queries on dashboards) that are not viewed (not polled). This is now possible with the config option IDLE_POLL_TIME_BEFORE_LIVE_QUERY_IS_CANCELLED_MINUTES. Default is 1 hour.
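
      For example, in the Humio environment configuration (the value shown here is illustrative, not a recommendation):

      IDLE_POLL_TIME_BEFORE_LIVE_QUERY_IS_CANCELLED_MINUTES=120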

    • Improvement: Better and faster query input field. We are using a new query input field where you should experience less "input lag" when writing queries. At the same time, syntax highlighting has been tweaked, and while it still does not support some things like array notation, it is better than previous versions.

    • Clock on Dashboards. Making it easier to know what time/timezone Humio is displaying results for.

    • New alt language construct. This allows alternatives similar to case or cond in other languages. With:

      logscale Syntax
      ... | alt { <query>;
      <query>; ...; * } | ...

      Every event passing through will be tried against the alternatives in order until one emits an event. If you add ; * at the end, events will pass through unchanged even if no other alternative matches. Aggregate operators are not allowed in the alternative branches.
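
      As a hedged illustration of the construct (the field and value names are hypothetical, not from this release):

      logscale Syntax
      ... | alt { statusCode=5* | kind := "server-error";
      statusCode=4* | kind := "client-error"; * } | ...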

    • New eval syntax. As a shorthand for ... | eval(foo=expr) | ... you can now write ... | foo := expr | .... Also, on the left hand side in an eval, you can write att := expr, which assigns to the field that is the current value of att.
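
      As an illustration of the shorthand, these two expressions are equivalent (the field names are examples):

      logscale Syntax
      ... | eval(latency_ms = latency * 1000) | ...
      ... | latency_ms := latency * 1000 | ...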

Humio Server 1.0.48 Archive (2018-02-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.48Archive2018-02-19

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release. Data migration is required: the backups are incompatible.

New features and improvements

  • Summary

    • Export to file. It is now possible to export the results of a query to a file. When exporting, the result set is not limited for filter queries, making it possible to export large amounts of data. Plain text, JSON and ND-JSON (Newline Delimited JSON) formats are supported in this version.

  • Functions

    • top() function. Find the most common values of a field.

    • format() function. Format a string using printf-style.
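
      A brief sketch of the two new functions (the field names are illustrative, not from this release):

      logscale Syntax
      url=* | top(url)
      ... | format("%s -> %d", field=[url, statusCode])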

Fixed in this release

  • Summary

    • global-snapshots topic in Kafka: Humio now deletes the oldest snapshot after writing a new one, keeping only the latest 10.

    • Backup feature (using BACKUP_NAME in env) now stores files in a new format. If using this, you must either move the old files out of the way, or set BACKUP_NAME to a new value, thus pointing to a new backup directory. The new backup system will proceed to write a fresh backup in the designated folder. The new backup system no longer requires the use of "JCE policy files". Instead, it needs to run on Java "1.8.0_161" or later. The current Humio docker images include "1.8.0_162".

    • Performance improvement for searches using "expensive" aggregate functions such as groupby() and percentile() in particular.

Humio Server 1.0.47 Archive (2018-02-07)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.47Archive2018-02-07

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Log4j2 updated from 2.9.1 to 2.10.0. If you are using a custom logging configuration, you may need to update your configuration accordingly.

    • To eliminate GC pauses caused by compression in the Kafka client in Humio, Humio now disables compression on all topics used by Humio. Humio compresses internally before writing to Kafka on messages where compression is required (ingest is compressed). This release of Humio enforces this setting onto the topics used by Humio. This is the list of topics used by Humio (assuming you have not configured a prefix, which would otherwise be applied to all of them):

      global-events global-snapshots humio-ingest transientChatter-events

      You can check the current non-default settings using this command:

      cd SOME_KAFKA_INSTALL_DIR
      ./bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type \
       topics --entity-name humio-ingest --describe
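
      To change a topic setting rather than just inspect it, the same tool accepts --alter; for example, adjusting retention on the ingest topic (the value shown is illustrative, not a recommendation):

      ./bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type \
       topics --entity-name humio-ingest --alter \
       --add-config retention.ms=86400000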
    • Removed GC pauses caused by java.util.zip.* native calls from compressed http-traffic triggering "GCLocker initiated GC", which could block the entire JVM for many seconds.

    • Reduced query state size for live queries decreasing memory usage.

    • Added concat() function.

Humio Server 1.0.46 Archive (2018-02-02)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.46Archive2018-02-02

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Functions

    • rdns() function now runs asynchronously, looking up names in the background and caching the responses. Fast static queries may complete before the lookup completes. Push rdns as far right as possible in your queries, and avoid filtering events based on the result, as rdns is non-deterministic.
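
      A sketch of the recommended placement, with rdns() applied after aggregation rather than used as a filter (the field name is illustrative):

      logscale Syntax
      groupby(ip) | rdns(ip)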

Humio Server 1.0.45 Archive (2018-02-01)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.45Archive2018-02-01

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Improved performance on live queries with large internal states

Humio Server 1.0.44 Archive (2018-01-30)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.44Archive2018-01-30

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release. Data migration is required. Rollback to previous version is supported with no actions required.

Fixed in this release

  • Summary

    • If the "span" for a timechart is wider than the search interval, the default span is used and a warning is added. This improves zooming in time on dashboards.

    • Fix bug in live queries after restarting a host.

    • OnPrem: Configuration obsolete: No longer supports the KAFKA_HOST / KAFKA_PORT configuration parameters. Use the KAFKA_SERVERS configuration instead.

    • Added VictorOps notifier

    • Regular expression parsing limit is increased from 4K to 64K when ingesting events.

    • Added PagerDuty notifier

    • On timechart, mouse-over now displays series sorted by magnitude, and pretty-prints the numbers.

    • OnPrem: Size of query states are now bounded by the MAX_INTERNAL_STATESIZE, which defaults to MaxHeapSize/128.

Humio Server 1.0.43 Archive (2018-01-25)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.43Archive2018-01-25

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Stop queries and warn if too big query states are detected

    • Warnings are less intrusive in the UI.

Humio Server 1.0.42 Archive (2018-01-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.42Archive2018-01-23

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Ingesting to a personal sandbox dataspace using ingest token was not working.

    • Firefox is now supported.

    • Added "tags" to "ingest-messages" endpoint to allow the source to add tags to the events. It is still possible and recommended to add the tags using the parser.

    • Added OpsGenie notification template

    • Support ANSI colors

  • Documentation

    • Added documentation of the file formats that the lookup() function is able to use.

  • Automation and Alerts

    • An alert could fire a notification on a partial query result, resulting in extra alerts being fired.

Humio Server 1.0.41 Archive (2018-01-19)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.41Archive2018-01-19

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Fix bug #35, which prevented you from doing e.g. groupby on fields containing spaces or quotes in their field name.

    • New front page. You can now jump directly to a dashboard from the front page using the dropdown on each list item. All dashboards can also be filtered and accessed from the "Dashboards Tab" on the front page.

    • For on-prems: You can now adjust BLOCKS_PER_SEGMENT from its default to influence the size of segment files.

    • New implementation of Query API for integration purposes.

    • Added suggestions on sizing of hardware to run Humio: Instance Sizing.

    • Better Page Titles for Browser History.

    • Startup time reduced when running on large datasets.

    • Multiple problems on the Parsers page have been fixed.

    • replace() function on @rawstring now also works for the live part of a query.

    • More guidance for new users in the form of help messages and tooltips.

    • If Kafka did not respond for 5 seconds, ingested events could get duplicated inside Humio.

    • Cancelled queries influenced performance after they were cancelled.

    • Renewing your API token from your account settings page.

  • Functions

    • sort() and table() now support sorting on multiple fields. sort() can also sort values as hexadecimal numbers when the field value starts with "(-)0x" or when the type=hex argument is given.
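
      For instance (the field names are illustrative):

      logscale Syntax
      ... | sort([host, responseTime])
      ... | sort(address, type=hex)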

Humio Server 1.0.40 Archive (2018-01-09)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.40Archive2018-01-09

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Added option to do authentication in a http proxy in front of Humio, while letting Humio use the username provided by the proxy.

    • Fixed a performance regression in the latest release when querying, which in particular hit data sources with small events.

  • Functions

    • percentile() function now accepts multiple percentiles as parameters, allowing multiple series to be plotted as percentiles in a timechart.

Humio Server 1.0.39 Archive (2018-01-04)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.39Archive2018-01-04

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Filebeat now utilises tags in parsers. The Filebeat configuration is still backward compatible.

    • Netflow support for on premises customers. It is now possible to send Netflow data directly to Humio. It is configured using Ingest listeners.

    • Tags can be defined in parsers (see Event Tags).

    • Tag sharding. A tag with many different values would result in a lot of small datasources, which hurts performance. A tag will be sharded if it has many different values. For example, using a field user as a tag with a large number of distinct users could result in an equally large number of datasources. Instead the tag will be sharded and allowed to have 16 different values (by default). In general, do not use a field with high cardinality as a tag in Humio.

    • Root user management in the UI. A gear icon has been added next to the "Add Dataspace" button, if you are logged in as a root user. Press it and it is possible to manage users.

    • Better Zeek (Bro) Network Security Monitor integration.

    • Datasources are autosharded into multiple datasources if they have huge ingest loads. This is mostly an implementation detail.

Humio Server 1.0.38 Archive (2017-12-18)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.38Archive2017-12-18

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Fixed a bug where, on the parsers page, the fields found during parsing were hidden.

    • Fixed a bug that leaked Kafka-connections.

    • Turned off LZ4 on the connection from Humio to Kafka. Note: storage of data in Kafka is controlled by broker settings, although a "producer" setting there will now turn compression off. The suggested Kafka broker (or topic) configuration is "compression.type=lz4".

Humio Server 1.0.37 Archive (2017-12-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.37Archive2017-12-15

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Set default timechart(limit=20). This can cause some dashboards to display warnings.

Humio Server 1.0.36 Archive (2017-12-14)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.36Archive2017-12-14

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Different View Modes have been made more prominent in the Search View by the addition of tabs at the top of the result view. As we extend the visualization to be more specialized for different types of logs we expect to add more Context Aware tabs here, as well as in the inspection panel at the bottom of the screen.

    • Styling improvement on several pages.

    • Event List Results are now horizontally scrollable, though limited in length for performance reasons.

    • Typo Corrections in the Tutorial

    • Performance improvements in timecharts.

    • New Search View functionality allows you to sort the event list to show newest events at the end of the list.

    • Syntax highlighting in the event list for certain formats including JSON.

    • Scrolling the event list or selecting an event will pause the result stream while you inspect the events; this especially makes it easier to look at Live Query results. Resume a stream by hitting Esc or clicking the button.

Humio Server 1.0.35 Archive (2017-12-13)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.35Archive2017-12-13

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • New parameter timechart(limit=N) chooses the "top N charts", selected as the charts with the most area under them. When unspecified, a default limit is used, and a warning is produced if it is exceeded. When specified explicitly, no warning is issued.

    • Filter functions can now generically be negated !/foo/, !cidr(...), !in(...), etc.
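
      A short sketch of negated filters in a query (the field names and subnet are illustrative):

      logscale Syntax
      !cidr(ip, subnet="10.0.0.0/8") | !in(status, values=["200", "204"])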

    • Upgraded to Kafka 1.0. This is IMPORTANT for on-premises installations: it requires updating the Kafka docker image before updating the Humio docker image.

Humio Server 1.0.34 Archive (2017-12-11)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.34Archive2017-12-11

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Tags can now be sharded, allowing e.g. IP addresses to be added as tags. (Only for root users, ask your admin.)

    • Support datasources with large data volumes by splitting them into multiple internal datasources. (Only for root users, ask your admin.)

Humio Server 1.0.33 Archive (2017-12-07)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.33Archive2017-12-07

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Kafka topic configuration defaults changed and documented. If running on-premises, please inspect and update the retention settings on the Kafka topics created by Humio to match your Kafka setup. See Kafka Configuration.

Humio Server 1.0.32 Archive (2017-12-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.32Archive2017-12-06

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Improved ingest performance by batching requests more efficiently to the Kafka ingest queue. The queue serialization format changed as well.

    • Fixed a bug with some tables having narrow columns, making text span many lines.

    • Fixed a bug in timechart graphs: the edge buckets made the graph go too far back in time and also into the future.

    • New implementation of the timeChart() function with better performance.

    • When saving queries/alerts, the query currently in the search field is saved, not the last one that ran.

Humio Server 1.0.31 Archive (2017-11-26)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.31Archive2017-11-26

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Fixed: a failure to compile a regexp in a query was reported as an internal server error.

    • Kafka producer settings are now relative to the Java max heap size.

    • Humio now sets a CSP header by default. You can still replace this header in your proxy if needed.

Humio Server 1.0.30 Archive (2017-11-24)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.30Archive2017-11-24

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor update release.

Fixed in this release

  • Summary

    • Improve support for running Humio behind a proxy with CSP

    • Possible to specify tags for ingest listeners in the UI

    • Fix links to documentation when running behind a proxy

Humio Server 1.0.29 Archive (2017-11-21)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.29Archive2017-11-21

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • New sandbox dataspaces. Every user gets their own sandbox dataspace. It is a personal dataspace, which can be handy for testing or for quickly uploading some data.

    • New interactive tutorial

    • UI for adding ingest listeners (only for root users).

    • Added pagination to tables

    • Fixed a couple of issues regarding syntax highlighting in the search field

Humio Server 1.0.28 Archive (2017-11-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.28Archive2017-11-15

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Add documentation for new regular expression syntax

    • Fix bug with "save as" menu being hidden behind event distribution graph

    • Fix bug where Humio ignored the default search range specified for the dataspace

Humio Server 1.0.27 Archive (2017-11-14)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.27Archive2017-11-14

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • New Humio agent for Mesos and DC/OS

    • Possible to specify tags when using ingest listeners

    • Grafana integration. Check it out

    • Alerts are out of beta.

    • Improved Error handling when a host is slow. Should decrease the number of warnings

Humio Server 1.0.26 Archive (2017-11-09)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.26Archive2017-11-09

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • When no field is named, as in /err/i, @rawstring is searched.

    • When such a regex-match expression appears at top level, e.g. between two bars | /regex/ |, named capturing groups also cause new fields to be added to the output event, as with the regex() function.

    • Performance has improved for most usages of regex (we have moved to use RE2/J rather than Java java.util.regex.)

    • New field = /regex/idmg syntax for matching. Optional flags: i=ignore case, m=multiline (change semantics of $ and ^ to match each line, not just start/end), d=dotall (. includes \n), and g=same as repeat=true for the regex() function. I.e. to case-insensitively find all log lines containing err (or ERR, or Err) you can now search /err/i
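
      For example, a case-insensitive match with a named capturing group on an assumed field (the field and group names are illustrative):

      logscale Syntax
      url = /\/user\/(?<userId>\S+)/i | groupby(userId)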

    • A bug has been fixed where searching for unicode characters could cause false positives.

    • Improve syntax highlighting in search field

Humio Server 1.0.25 Archive (2017-11-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.25Archive2017-11-06

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Anonymous Composite Function Calls can now make use of filter expressions: #type=accesslog | groupby(function={ uri=/foo* | count() })

    • Support for C-style comments: // single line or /* multi line */.

    • New HTTP Ingest API supporting parsers.

    • Saved queries can be invoked as a macro (see User Functions) using the following syntax: $"name of saved query"() or $nameOfSavedQuery(). Saved queries can declare arguments using ?{arg=defaultValue} syntax. Such arguments can be used wherever a string, number or identifier is allowed in the language. When calling a saved query, you can specify values for the arguments with a syntax like: $savedQuery(arg=value, otherArg=otherValue).
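
      As an illustration (the saved-query name, argument, and field are hypothetical): given a saved query whose body is loglevel = ?{level="ERROR"} | count(), it could be invoked as:

      logscale Syntax
      $errorsByLevel(level="WARN")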

Humio Server 1.0.24 Archive (2017-11-01)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.24Archive2017-11-01

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Event timestamps are set to Humio's current time at ingestion if they have timestamps in the future. These events will also be annotated with the fields @error=true and @error_msg='timestamp was set to a value in the future. Setting it to now'. Events are allowed to be at most 10 seconds into the future, to account for some clock skew between machines.

    • Improved handling of server deployments in dashboards

    • Created a public github repository with scripts to support on-premises Humio installation and configuration.

    • Timecharts are redrawn when series are toggled

    • Fix bug with headline texts animating forever

Humio Server 1.0.23 Archive (2017-10-23)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.23Archive2017-10-23

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor release.

Fixed in this release

  • Summary

    • Fixed Bug in search field when pasting formatted text

    • Fixed Session timeout bug when logging in with LDAP

    • Better support for busting the browsers local cache on new releases

Humio Server 1.0.22 Archive (2017-10-17)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.22Archive2017-10-17

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Added time range parameterization to dashboards

    • Fixed visual bug in the event distribution graph

  • Functions

    • The in() function now allows wildcards in its values parameter
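
      For example, matching any 4xx or 5xx response in an assumed status field (illustrative):

      logscale Syntax
      in(status, values=["4*", "5*"])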

Humio Server 1.0.21 Archive (2017-10-17)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.21Archive2017-10-17

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Added syntax highlighting of the query in the search field.

    • Allow resizing the search field.

Humio Server 1.0.20 Archive (2017-10-13)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.20Archive2017-10-13

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Minor release.

Fixed in this release

  • Summary

    • A system job now periodically compacts and merges small segment files (caused by low-volume data sources), improving performance and reducing storage requirements.

    • Fixed a bug showing the basic authentication dialogue in the browser when the login token expires.

    • Added a negate=true|false parameter to the cidr() function.

    • Added IPv6 support to cidr().
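
      A sketch combining the two additions (the field name and subnet are illustrative):

      logscale Syntax
      cidr(ip, subnet="2001:db8::/32", negate=true)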

Humio Server 1.0.19 Archive (2017-10-11)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.19Archive2017-10-11

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Mouse over in timecharts now displays values for all series in hovered bucket

    • Since the ingest queue is now on by default, if running a clustered setup make sure to update the ingest partition assignments. At the very least, reset them to defaults (see Cluster Management API).

Humio Server 1.0.18 Archive (2017-10-10)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.18Archive2017-10-10

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Cloud-only release.

Fixed in this release

  • Summary

    • Ingest queue is used by default (if not disabled)

    • Events are highlighted in the eventdistribution graph when they are hovered.

    • Improved Auth0 on-prem support.

    • Possible to migrate dataspaces from one Humio to another.

    • Heroku Log Format

    • Improved query scheduling for dashboards starting many queries at the same time.

Humio Server 1.0.17 Archive (2017-09-29)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.17Archive2017-09-29

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

This is a release for Humio Cloud users only.

Fixed in this release

  • Summary

    • Cloud-only release.

Humio Server 1.0.16 Archive (2017-09-06)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.16Archive2017-09-06

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Fix bug with Events list view for aggregate queries

    • Generic UDP/TCP ingest added (for e.g. syslog). Config with HTTP/JSON API only, no GUI yet.

    • UI improvements with auto suggest / pop up documentation.

    • New LDAP configuration options: add ldap-search to AUTHENTICATION_METHOD to use a bind user.

    • Fix bug with combination of add-cluster-member and real-time-backup-enabled.

  • Functions

Humio Server 1.0.15 Archive (2017-08-30)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.15Archive2017-08-30

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Copy dashboard feature

    • Improve Auth0 dependencies. (Better handling of communication problems)

    • Change styling of list widgets

    • Syslog ingestion (Line ingestion) in Beta for on premises installations

Humio Server 1.0.14 Archive (2017-08-17)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.14Archive2017-08-17

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Make it possible to show event details when looking at raw events inside a timechart (1438)

    • Show warning when there are too many points to plot in a timechart and some are discarded (1444)

    • Fix scrolling in safari for tables (1308)

Humio Server 1.0.13 Archive (2017-08-16)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.13Archive2017-08-16

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Remember which tab to show in event details drawer (Same as the last one)

    • Documentation for cluster management operations

    • Dataspace type ahead filter on frontpage

    • Ingest requests wait for one Kafka server to acknowledge the request by default (reduces data loss when machines fail)

    • Widget options now use radio buttons for many options

Humio Server 1.0.12 Archive (2017-08-04)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.12Archive2017-08-04

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Background tabs are only updated minimally, resulting in much less CPU usage.

    • Fix an issue with scrollbars appearing in dashboards. (1403)

    • Various minor UI changes.

    • Fixed a bug that would prevent wiping the Kafka instance used to run Humio. (1347, 1408)

    • New 'server connection indicator' shows that the server is currently reachable from the browser.

Humio Server 1.0.11 Archive (2017-07-09)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.11Archive2017-07-09

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Cloud-only release.

Fixed in this release

  • Summary

    • Improved the update logic for read-only dashboards (#1341)

    • Fix an issue where login fails and the UI hangs w/auth0. (#1368)

    • Improved rendering performance for dashboards (#1360)

    • When running an aggregate query (such as a groupby) the UI now shows an Events list tab to see the events that were selected as input to the aggregate.

Humio Server 1.0.10 Archive (2017-06-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.10Archive2017-06-22

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Support for LDAP authentication for on-premises installations. (#1222)

    • For calculations on events containing numbers, the query engine now maintains a higher precision in intermediate results. Previously, numbers were limited to two decimal places, so now smaller numbers can show up in the UI. (#603)

    • Certain long queries could crash the system. (#781)

    • The event distribution graph is now aligned better with the graphs shown below.

    • The limit parameter on table() and sort() functions now only issues a warning if the system limit is reached, not when the explicitly specified limit is reached. (#1323)

    • Various improvements in the scale-out implementation. Contact us for more detail if relevant.

    • Ingest requests are no longer rejected with an error when incoming events contain fields reserved for Humio (such as @timestamp). Instead, an @ is prepended to the field name, and extra fields describing the problem (@error=true) are added to the event. (#1320)
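The renaming of reserved fields on ingest can be sketched as follows (a minimal illustration; the reserved-field set shown here is incomplete and hypothetical):

```python
RESERVED = {"@timestamp", "@rawstring"}  # illustrative subset, not the full set

def sanitize_event(event: dict) -> dict:
    """Rather than rejecting the event, prepend '@' to any reserved
    field name and flag the event with @error=true."""
    out = {}
    had_error = False
    for key, value in event.items():
        if key in RESERVED:
            out["@" + key] = value
            had_error = True
        else:
            out[key] = value
    if had_error:
        out["@error"] = "true"
    return out

print(sanitize_event({"@timestamp": "2017-01-01", "msg": "hello"}))
# {'@@timestamp': '2017-01-01', 'msg': 'hello', '@error': 'true'}
```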

Humio Server 1.0.9 Archive (2017-06-15)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.9Archive2017-06-15

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • The event details view has been improved in various ways: remember height, new buttons for 'groupby attribute' and 'filter without'. (#1277)

    • In certain cases, live queries producing a warning would add the same warning repeatedly for every poll. (#1255)

    • While running a query, the UI will now indicate progress from 0–100%. (#1262)

    • The scale-out implementation is improved in several ways. Most significantly, functionality adding a node to a cluster has been added. Contact us for more detail if relevant.

    • Timecharts with span=1d now use the browser timezone to determine day boundaries. (#1250)

    • Fixed a bug where read-only dashboards allowed dragging/resizing widgets. (#1274)

    • Humio can optionally use re2j (Google's regular expression implementation), which is slightly slower than the default Java version but avoids some corner cases that can cause stack overflows. Controlled with USE_JAVA_REGEX, which defaults to true.

    • For UI queries (and those using the queryjob API) the limit on the result set is lowered to 1.0 rows/events. This avoids the UI freezing in cases where a very large result set is generated. To get more than 1.0 results the query HTTP endpoint has to be used. (#1281, #960)

    • Add parameters unit and buckets to timeChart(). The parameter buckets allows users to specify a specific number of buckets (maximum 1.0) to split the query interval into, as an alternative to the span parameter which has issues when resizing the query interval. The unit parameter lets you convert rates by e.g. passing unit="bytes/bucket to Mibytes/hour". As the bucket (or span) value changes, the output is converted to the given output unit. (#1295)
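The rescaling that the unit parameter performs when bucket spans change can be illustrated with a small calculation (a sketch assuming decimal megabytes; the helper function is hypothetical, not part of Humio):

```python
def convert_rate(bytes_per_bucket: float, bucket_seconds: float) -> float:
    """Rescale a per-bucket byte count to megabytes per hour, so the
    plotted rate stays comparable when the bucket span changes."""
    bytes_per_hour = bytes_per_bucket / bucket_seconds * 3600
    return bytes_per_hour / 1_000_000  # Mbytes/hour

# 500,000 bytes in a 5-minute bucket -> 6.0 Mbytes/hour
print(convert_rate(500_000, 300))
# Halving the bucket span doubles the rate for the same per-bucket count
print(convert_rate(500_000, 150))
```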

Humio Server 1.0.8 Archive (2017-05-22)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.8Archive2017-05-22

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Major release includes early access for new multi-host scale-out functionality. See separate documentation for how to install and configure these functions.

Fixed in this release

  • Summary

    • Fixed a bug with time charts that did not always include the Plotline Y. (#1111)

    • Fixed a bug which made docs not redirect properly for on-prem installations (#1112)

    • Dashboards now indicate errors in the underlying queries with a transparent overlay (#775)

    • Fixed minor bug in parser selection (only used in undocumented tags selection mechanism)

    • For aggregate queries running longer than 2 seconds, the order in which logs are processed is shuffled. This lets the user get a rough estimate of the nature of the data, which works well for such queries using e.g. avg or percentiles aggregates. (#1227)

    • Fixed a bug with live aggregate queries which could cause results to inflate over time. (#1213)

    • Dashboards can now be reconfigured by dragging and resizing widgets (#1205)

Humio Server 1.0.7 Archive (2017-05-04)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.7Archive2017-05-04

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release.

Fixed in this release

  • Summary

    • Improve scroll behavior in tables on dashboards (#1190)

    • Added UI to allow root users to set the retention on data spaces (#502)

    • A new flag, groupby(limit=N), allows specifying the maximum number of groups (0 up to ∞). If more than N entries are present, elements not matching one of the existing groups are ignored and a warning is issued. The system has a hard limit of .00, which the operator can remove by setting ALLOW_UNLIMITED_GROUPS=true in the Humio configuration file (environment file for Docker). (#1199)
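The limiting behavior of groupby(limit=N) can be sketched in Python (the helper and warning text are illustrative, not Humio's implementation):

```python
from collections import defaultdict
import warnings

def group_by(events, key, limit):
    """Once `limit` distinct groups exist, events that would create a
    new group are ignored and a warning is issued; events matching an
    existing group are still counted."""
    groups = defaultdict(list)
    for event in events:
        value = event[key]
        if value not in groups and len(groups) >= limit:
            warnings.warn("group limit reached; event ignored")
            continue
        groups[value].append(event)
    return dict(groups)

events = [{"host": h} for h in ["a", "b", "a", "c", "b"]]
print(sorted(group_by(events, "host", limit=2)))  # ['a', 'b']
```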

Humio Server 1.0.6 Archive (2017-04-27)

Version?Type?Release Date?Availability?End of Support

Security

Updates

Upgrades

From?

Config.

Changes?
1.0.6Archive2017-04-27

Cloud

2020-11-30No1.1.0No

Available for download two days after release.

Regular update release

Fixed in this release

  • Summary

    • Fixes for logarithmic scale graphs (#1111)

    • In the event-list view, a toggle has been added to enable line wrapping. (#1121)

    • Dashboard settings have been moved to the dataspace page, rather than on the front page (#1125)

    • Save metadata locally to the file global-data-snapshot.json rather than to the Kafka topic global-snapshots. This file should only be edited while the server is down, and even then with care.

    • Allow configuring a standard search interval other than 24h (#1149)